The practical answer is no. There is an unimaginable number of possible 3:30-minute videos—far more than the number of possible 4kb or even 40kb files.
To be fair, most of those possible videos are just noise. We don't have to be able to compress those, because people don't care whether one video of noise differs from another. We also don't have to reconstruct the video perfectly: as long as it looks more or less the same, the audience is happy. (This is called "lossy compression".)
But even with these caveats, there is no realistic method for compressing realistic 3:30 minute videos that well on a computer. We likely can't do all that much more than current compression algorithms without a different set of tradeoffs. (Like being better at some videos but worse at others.)
That said, a big part of how compression works is by relying on information already present when decompressing. This demo relies on having a particular kind of chip with certain capabilities (i.e. a CPU and a GPU) and presumably some standard library functions... etc.
How well could we "compress" videos if we had more information available when decompressing? Here's a fun thought experiment: what if we had a model of a human mind? We could then feed in a pretty sparse description and have the model fill in the details in a natural intuitive way. It would be very lossy, but the results would be compelling.
And you know what? That's a decent mental model of how speech works! If you just look at information content, spoken words are not very dense. But if I describe a scene you can imagine it almost as if you're seeing a video. This works because we both have the same sort of brain as well as shared experiences and intentions.
You can think of speech as incredibly effective—but also rather lossy—compression.
It could be very useful to deliberately pursue SUPER lossy compression. As long as no one can really tell based on the end result, it doesn't really matter.
For example, if you can only tell something was lossy by directly comparing two instances of the same video during playback, then that's probably good enough in most situations.
It occurred to me that we could compress the hell out of written works by translating them into some super-dense language, ultimately retaining only the basics of the meaning, the concepts, and some of the writing style. Then we can re-translate that back into whatever language we want to read it in.
For compressing pictures or videos, there could be some similar translation to a much more compact representation. Would probably rely on ML heavily though.
4K of English text is a couple of pages of a novel, enough to describe a character and a situation, maybe an interaction. A good writer can conjure up a whole world in 4K... but probably not a description of an arbitrary 3 and a half minutes of activity.
Nice insight about the CPU and the standard libraries being a relevant factor; I hadn't thought of that.
Your thought experiment sounds more like a "codec" than a procedural generation. I guess it is an arbitrary line given that we are using CPU, etc. But the bigger the decompressing "model" the further away from true 4k compression we are.
The Kolmogorov Complexity of a video (or any other data) is the size of the shortest program which outputs that video then halts. This 4k executable is similar in spirit, but also follows strict rules about efficiency: Kolmogorov complexity places no time limits on that shortest program, whereas this program must output pixels fast enough to make the video realtime.
Sorry, I thought it was obvious, but the question is:
Could procedural generation be used to achieve amazing compression rates given a currently impossible to code algorithm?
No, only very specific videos, like this particular one. The art is in finding a pretty video that you can render in 4kb, not making a pretty video and then reducing it to 4kb. The latter would most likely be impossible.
"39. Re graphics: A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures."
It's the pigeonhole principle; there are only a few long videos possibly encodable as short programs because there are only a few short programs in the first place. To get compression performance, one has to target an ever smaller subset of possible videos, which eventually starts becoming an AI-complete problem.
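To put rough numbers on the pigeonhole argument, here's a back-of-the-envelope sketch in Python. The video parameters (640x360, 24 fps, 24-bit colour) are assumptions picked purely for scale, not anything from the demo itself:

```python
# A back-of-the-envelope check of the pigeonhole argument:
# there are far fewer 4 kB programs than possible raw videos.
from math import log10

# Every distinct 4 kB (4096-byte) file is a distinct bit string,
# so there are at most 2**(4096*8) of them (shorter files only
# add a negligible factor).
program_bits = 4096 * 8
programs_log10 = program_bits * log10(2)

# A raw 3:30 video at a modest 640x360, 24 fps, 24-bit colour
# (assumed parameters, just for scale):
frames = 210 * 24
pixels = 640 * 360
video_bits = frames * pixels * 24
videos_log10 = video_bits * log10(2)

print(f"distinct 4 kB programs: ~10^{programs_log10:.0f}")
print(f"distinct raw videos:    ~10^{videos_log10:.0f}")
# Each program decompresses to at most one video, so all but a
# vanishing fraction of videos have no 4 kB representation.
```

Even at that tiny resolution, the count of raw videos dwarfs the count of 4 kB programs by billions of orders of magnitude.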
Sure. 2^4096 is 10^1233. Let's just look at dialogue. Even if you limit yourself to boring 5-word sentences with 2,000 possible words for each position (subject verb preposition adjective object), 5^2000 = 8.7 * 10^1397 which means in the very first sentence you've got 10^164 times as many videos as you could possibly index with only 4096 bits.
Late addition: I thought I fixed all the stupid math problems before I posted this, but it's still totally wrong. Even leaving aside the fact that English doesn't have 2,000 prepositions, which I just glossed over :)
A five-word sentence with 1000 options per word isn't 5^1000 but only 1000^5 = 10^15. If we break the movie into 5-second blocks we get 48 of them in a 4-minute movie so (10^15)^48 = 10^720 different movies, which is not bad but we're still 10^513 away. There are a lot more variations we could consider - different actors, costumes, sets, framing, color grading etc. and I think it's plausible that we could come up with enough features. Heck if you talk twice as fast, you could get (10^15)^(48*2) = 10^1440. But it's a lot bigger than I made it out to be.
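The corrected arithmetic is easy to sanity-check in a few lines of Python (the 10^1233 figure comes from the 2^4096 mentioned earlier in the thread):

```python
# Sanity-checking the corrected combinatorics from the comment above.
import math

sentences = 1000 ** 5          # 5-word sentences, 1000 choices per slot
assert sentences == 10 ** 15   # not 5**1000

blocks = 48                    # 5-second blocks in a 4-minute movie
movies = sentences ** blocks
assert movies == 10 ** 720

# 4096 bits can index about 2**4096 ~ 10**1233 distinct items,
# so dialogue alone still falls 10**513 short of exhausting them.
print(round(4096 * math.log10(2)))
```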
Obviously it would be AI-complete. I didn't know that term; that's what I meant by "currently impossible to code". I just learned my favorite term ever, thanks for that!
Although disappointing, you seem to have the correct answer for my question.
The difference between procedural generation and a video is similar to the difference between raster and vector graphics. Demoscene intros like this are more like your computer giving a live performance from scratch than playing a movie. Ideas like video compression don't really apply. They create 3D models and textures from simple math functions and filters, make a world from them, add more math functions for camera movements, and play some synthesized music that's more akin to MIDI than MP3 (to put it simply).
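As a toy illustration of "textures from simple math functions": a few lines of trig can generate a whole plasma-style image, the way an intro generates content instead of storing it. This is just a minimal sketch, not how any particular demo does it:

```python
# A plasma-style texture generated entirely from math functions,
# in the spirit of demoscene procedural content.
import math

def plasma(x, y):
    """Brightness in [0, 1] at pixel (x, y), from pure math."""
    v = (math.sin(x / 8.0)
         + math.sin(y / 13.0)
         + math.sin((x + y) / 11.0)
         + math.sin(math.hypot(x - 32, y - 32) / 5.0))
    return (v + 4) / 8  # map [-4, 4] -> [0, 1]

# Render a tiny ASCII preview; a real intro would feed a shader instead.
ramp = " .:-=+*#%@"
for y in range(0, 24, 2):
    print("".join(ramp[int(plasma(x, y) * (len(ramp) - 1))]
                  for x in range(64)))
```

The "compressed" form of the whole image is just the function and its constants, which is why such content costs almost no bytes.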
I recently began making a function that can output the 2D lines of the walls of a house, with windows and different shapes (L, S, T), and inside walls that are generated from points and NESW directions. It was pretty fun and challenging, but now I have to move to 3D to turn this base outline into a level with windows and doors.
The only things I have to give this function are the height/width ratio, another ratio that defines how large the "corner holes" are in the L/S/T configurations, the number and relative positions of the windows and doors, and the starting point and NESW direction for the inside walls; with all that, I can create a one-story house with an interior. Of course it's not finished yet, and there's no furniture or detail, but you can see that, in theory, you can use procedural generation as a compression tool for human-designed structures in a way no machine learning algorithm or autoencoder could really achieve.
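The idea can be sketched in a few lines: an L-shaped footprint defined entirely by a couple of ratios, so the whole outline "decompresses" from a handful of numbers. All names and parameters here are hypothetical, not the commenter's actual code:

```python
# A minimal sketch of a ratio-driven house footprint generator.

def l_shaped_footprint(width, aspect=0.6, notch=0.4):
    """Return the outer-wall polygon of an L-shaped house.

    aspect: height/width ratio of the bounding box
    notch:  fraction of each side cut away to form the "corner hole"
    """
    h = width * aspect
    nx, ny = width * notch, h * notch
    # Corners listed counter-clockwise; consecutive pairs are walls.
    return [(0, 0), (width, 0), (width, h - ny),
            (width - nx, h - ny), (width - nx, h), (0, h)]

corners = l_shaped_footprint(10.0)
walls = list(zip(corners, corners[1:] + corners[:1]))
print(f"{len(walls)} wall segments:", walls)
```

Three numbers in, six wall segments out; windows, doors, and inside walls would just add a few more parameters.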
If you combine this kind of algorithm with a well-made OpenStreetMap database (think of the vector tiles used in GPS software), you could also recreate the whole world in 3D, with enough detail to make a game that wouldn't require much disk space. Recreating the roads, fences, parks, rivers, vegetation, elevation, etc. is difficult because it requires a lot of tuning and geometry tricks, but it's very cheap in terms of CPU cycles and disk.
The folks at Outerra have started building actual software that lets you browse the entire planet in 3D; you can zoom in real time from space down to 1 cm. They don't have cities yet, but they're planning for them. I want to make a game using such ambitious ideas, but it's not easy...
Well, it's not compressed, it's generated.
You could generate an endless video with less code, but it would most likely be uninteresting. Scene demos are interesting because it's art and direction and music generated from algorithms rather than creating those things and compressing them efficiently.
But, yes, at some level there is the idea of a DNA seed plus a process that creates something much more profound; we as humanity haven't come close to cracking that, though.
I suspect that even if it were possible to have an algorithm that can generate the seeds plus the process to expand them, that algorithm would take orders of magnitude longer to run than would be practical on any meaningful time scale.
Not visuals, but in a similar vein: random number generators with high dimensionality and equidistribution can be coerced into generating very specific output, given enough exploration of the output space.
For example, an output of all zeros, or the source of the random number generator itself, or a zipped archive of a work of Shakespeare.
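A small demonstration of "coercing" a PRNG into specific output: brute-force a seed whose first two generated bytes are both zero. This is a toy sketch; real equidistributed generators (e.g. Mersenne Twister) let you go much further by navigating the output space deliberately rather than by brute force:

```python
# Brute-force search for a seed that makes Python's PRNG
# emit a chosen prefix (here: two zero bytes).
import random

target = [0, 0]

def first_bytes(seed, n):
    """First n bytes produced by a PRNG seeded with `seed`."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

# ~1 in 65,536 seeds matches, so this finds one quickly.
seed = next(s for s in range(10**7)
            if first_bytes(s, len(target)) == target)
print(f"seed {seed} yields bytes {first_bytes(seed, 2)}")
```

The longer the target output, the exponentially more seeds you must search, which is the same pigeonhole wall as before: the seed is the "compressed" form, and short seeds can only reach a tiny fraction of possible outputs.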
I hate geoblocking, but looking through the site I just wanted to facepalm. It's the same as "poverty sucks, so let's give $1,000 a month to everybody, and where the money will come from will somehow be figured out by someone."
Universal minimum income is a lot more nuanced than that.
It's meant to replace a lot of services like food stamps and housing vouchers, let the poor pick what they spend their money on, and avoid bureaucracy. It's also meant to prevent the "welfare trap", an over-exaggerated but real problem where getting a job and earning money can cost you more in benefits than you gain from the job.
There have been interesting studies lately on the effects of direct cash charity, and a lot of studies showing the effects of the earned income tax credit, two things which have strong similarities to universal income.
I don't think it makes sense to force business plans on companies.
And yet we've done it multiple times in the US. We routinely encounter situations in the media industry where a new medium is hated/feared by the entrenched players, who try to use refusal to license content as a way to kill the new medium before it takes off. Up until very recently, the standard solution to this, in order to not have artificially granted monopolies on content stifle technological innovation, was for Congress to impose mandatory licensing schemes on the copyright holders.
That's how cable TV originally got off the ground, for example; over-the-air broadcast networks didn't want to license their content to cable, but Congress imposed mandatory licensing on them. Result: brand-new multi-billion-dollar industry that kept those networks alive a while longer.
At first glance I think they would take off and be airborne. Other than that they would do whatever they were programmed to do.
I know, my comment is stupid, thought I should fit in.
Maybe we can use this comment as a place to give our best guesses for the percentage of liars on the internet.
How many of the people claiming to have clicked a button a few minutes or hours after it was out, with spectacular irreversible consequences to their lives, are just phonies?
I would say between 40% and 90%, but I don't know better than that.
> Why do we say homosexuality is primarily genetic if evolution is true?
In my opinion it must be to help the fitness of the community they live in. The communities with low likelihood to spawn a gay person are (were?) much more likely to go extinct. It can also be a basic property (side effect?) of the sexual appeal arms race.
There is also the case of suicides, when they happen to young people who haven't reproduced yet. It can also be argued it's for the benefit of the community.
So the thesis is that the primary force in evolution is not the individual's fitness but the community's fitness.
Meh, it is not necessary to postulate that every part of behavior to which genetics contributes is a thing which has been selected for.
If a particular gene causes good things 95% of the time (based on other interactions or developmental factors or whatever) and catastrophic failures like heart disease or cancer or crippling depression 5% of the time (and it represents a local maximum, with no simple improvements possible), then it can easily come to dominate a gene pool, so long as the 95% benefit outweighs the 5% failure. It doesn't mean the 5% failure has been "selected for"; it just means that it wasn't worth weeding out.
Moths circle lights because of genetic factors which influence their tiny moth brain's development. It doesn't mean that circling-lights was a selected feature.
Yeah, that's definitely a possibility. What you describe is more or less the basic property/side effect I was talking about, but much better explained.
You're betting evolution is not that good. One can also bet it's better than you can possibly imagine. I don't think you can be sure either way, but your argument seems to be the most compelling at this point!
So what are the genetic advantages of being gay? And how come nature hasn't figured out a way to make these advantages without also producing gay siblings, which would seem to bring down the overall fitness of the genes?
Since the first comment didn't work, I'll try going into more detail.
Our genetic code doesn't include statements like "if (is_male) { try_to_have_sex_with_ladies(); } else { try_to_have_sex_with_dudes(); }".
Our genetic code produces proteins of different shapes and those shapes influence the development of the structure of our brains and their function and that brain generates feels of various kinds that lead to attractions of various kinds.
Evolution works by changing the coding of the genetic code, which changes the shapes or amounts of proteins produced, which changes the development... a local maximum happens when the final system produced (our feels of attraction) would be negatively affected by any individual mutation on the genetic code. There may be better feels-of-attraction systems you could build from scratch that would propagate the species better than ours, but you can't turn ours into this better system without making it worse for a while.
Nature can't make arbitrary changes; it can only make incremental changes. And consequently, we frequently get stuck at local maximums rather than global maximums.
Birth order has been correlated with the statistical likelihood of an individual being gay (i.e., the more older siblings you have, the more likely you are to be gay). This could definitely pose an evolutionary benefit to a stable population that lacks the natural resources to support a sudden boom of growth. Case in point: pandas. A show I watched last night put forth pandas' low reproductive rate as a way of ensuring low competition for their precious bamboo.
See, that just further proves my point. Where is the evidence for these assertions? Mainstream evolutionists have said that "group selection has been disproven", except possibly kin selection. So your example clashes with what they've said.
I personally think the benefit of the community is definitely one of the factors. And I think the evolutionists have it wrong. But why does the public just accept whatever they say? It hardly sounds like a settled theory.