
My quick and dirty interpretation after skimming that article: he misrepresented a hashing algo to non-technical people who didn't understand that a) it's a one-way function and b) even if it weren't, multiple inputs can still map to the same hash.
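
To make point (b) concrete, here is a minimal sketch with a toy 8-bit hash (purely illustrative, not any real algorithm): squeezing many inputs into a small fixed-size output guarantees collisions, so the hash cannot possibly "contain" the input.

    from collections import defaultdict

    # Toy 8-bit "hash": only 256 possible outputs, purely illustrative.
    def toy_hash(data: bytes) -> int:
        h = 0
        for b in data:
            h = (h * 31 + b) % 256
        return h

    # Pigeonhole: 65536 distinct 2-byte inputs must share 256 buckets,
    # so many different inputs map to the same hash value.
    buckets = defaultdict(list)
    for i in range(65536):
        buckets[toy_hash(i.to_bytes(2, "big"))].append(i)

    print(len(buckets[0]), "different 2-byte inputs all hash to 0")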



There's some more info/speculation on it here: https://www.spronck.net/sloot.html


> In his prototype, he faked his invention, which is why he refused to let anyone near it, and answered only in mystical vagueness to questions.

I was lucky enough to attend a demo given to a wealthy friend of mine who was asked to invest (alongside Pieper). I'll let Jan take the secret to his grave, but the writer of TFA is spot on: he faked it, yet he really did believe he could make it work. It's a very sad story.


I met him a few times, and after his death I was contacted by a 'friend' of his (you never know; I just know for sure that he lived around the block from him, as did I at the time, in that miserable town) who wanted to hire me to figure out the secret. They all thought it was real, but they lacked the background to reason about it correctly, such as Kolmogorov complexity. I don't think he really saw it as faking; he just thought he needed some more time to make it generally applicable.

The idea was basically repeatedly applied compression. You had four files: the original video, the compression exe, the decompression exe and the decompression data file. The compression would apply a fairly basic algorithm, more or less of the type 'replace a pattern of x bytes by 1 byte'. That mapping was written to the decompression data file, and the process was repeated until the compressed video was very small; however, the decompression data file would be very large (similar, obviously, to the sizes of the videos together). His secret computer had storage holding the decompression data file, and the idea was that he would, in time, find the ideal decompression data file (the Golden mapping, or some such) that would be small-ish and yet able to compress thousands of videos very efficiently. Which indeed would be enough, but it's not possible, of course.

To be clear: they believed they could ultimately have one data file of a few MB, with each video reduced to 64 KB, by re-applying the encoding and hitting some magic bag of mappings that would always be found repeating in very large files, thus making the compressed file smaller and smaller and smaller.
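
For the curious, here is a minimal sketch (my own reconstruction of the scheme as described above, not Sloot's actual code) of that 'replace a pattern of x bytes by 1 byte, write the mapping to a side file, repeat' idea. The payload only shrinks because the side table grows, and no small fixed table can work for all inputs.

    from collections import Counter

    def one_pass(data, next_token, mapping):
        # Replace the most common adjacent pair with a fresh token and
        # record the replacement in the "decompression data file".
        pairs = Counter(zip(data, data[1:]))
        if not pairs:
            return data, next_token
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            return data, next_token
        mapping[next_token] = pair
        out, i = [], 0
        while i < len(data):
            if i + 1 < len(data) and (data[i], data[i + 1]) == pair:
                out.append(next_token)
                i += 2
            else:
                out.append(data[i])
                i += 1
        return out, next_token + 1

    data = list(b"a highly repetitive test video stand-in " * 200)
    mapping, token = {}, 256
    for _ in range(60):
        data, token = one_pass(data, token, mapping)

    # The payload ends up tiny, but only because the mapping table (the part
    # the demo kept out of sight) holds the information that was removed.
    print("payload:", len(data), "symbols, side table:", len(mapping), "entries")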

I don't really know how far he got with this, and nobody knows or ever will. I would wager that IF they (the investors, the people the investors hired, etc.) found that floppy disk, they would make it disappear, given the enormous embarrassment if it ever leaked out.


I am no specialist -- and although I understand and agree with the impossibility being referred to -- somehow it seems to me that AI models are "kind of" getting "closer" to this "golden decompression data file". Although AI models are not that huge, from a tiny human input (the "compressed data") they manage to "decompress" to data of mind-blowing quality, highly detailed and in amazing variety, while staying extremely coherent. These results are "inexact", sure (being exact is the aforementioned impossibility), but to the human perception they seem "perfect", which is good enough (for movies and other arts).


Yes, but the Sloot method was supposed to be lossless. When we talk lossy, it gets trickier, because then you have to define what percentage of loss and error is acceptable. I am sure we'll have AIs that can produce something Terminator-ish in a bit; the thing is, it will be like you recalling the movie: the bigger plot will be similar, but a lot of the details will be completely off/wrong. That's not the type of compression/encoding Mr Sloot was talking about.

Edit: encryption was supposed to be encoding/compression.


> encryption

I think you meant compression?


Yeah, I corrected :] They called it Digital Coding System. I don't remember who 'they' was though.


By your definition the script and a list of actors should be counted as compression, but that's clearly not what this particular invention claimed to do. An AI model is more like a drawing-by-the-numbers game than a compression method. It creates something that looks superficially like the original but isn't the original.


Any "compression" mechanism that apparently violates Shanon's theorems would be "lossy" anyway, and lossy compression is essentially creating something that looks superficially like the original but isn't.

A script and a list of actors would take up 8k already (if not more), so yeah, an AI that can work on the prompt "take this script and make it like a Hollywood blockbuster" might be our best way to attempt to recreate this "compression" system with SoTA tech.


Sloot claimed his method was lossless, and it supposedly started out from a digital representation (without compression artifacts such as those introduced by a DCT or FT).


You are extremely close to having it all figured out. My then-friend Hugo Krop[1] realized that something didn't add up, but he also missed the required background, which is where I was brought in. I figured out how the demo was rigged and told him; that was the end of that. Interestingly, Pieper did go for it, and Pieper wasn't exactly dumb himself. I never really got that bit; he must have realized it was a scam. The demo was held in a building on the Sarphatikade in Amsterdam.

[1] Of 'TextLite' fame, deceased, very colorful, and later on a scammer in his own right.


> Sarphatikade

Yeah, that demo was somewhat legendary back then. But how did you figure out how the demo was rigged? Although I met Sloot, he never demo'd it to me, and I never saw a live demo (not on video either; why are there no videos? Pieper took the machine somewhere once). His friend said he did demo it to him, and also told him, over time, more or less how it worked. I remember him saying all the time that Sloot (and now this guy) talked about infinite compression like it was the most trivial thing in the world, so I don't suppose they actually thought that part was any secret.

What I find very strange about the Pieper part (I also met him, through a company (a client) he advised via that investment vehicle he had, of which I don't think any of the companies made it) is not that he fell for it. Unlike what others say, he didn't appear very clever to me, at least not in anything tech; maybe in business, although... He seemed like a blaaskaak (a windbag) when I met him: arrogant as hell and not much substance, but maybe that was his spiel for the CEO of the company he invested in. Anyway, what I find strange is not that he fell for it, but that his Philips tech colleagues, who saw the 'invention' multiple times, didn't have the same feeling as you and Krop, and didn't warn him with 'you must be insane to believe this, boss' or something. It's not as if Dutch people would hold back, even with the boss.


Jan left a wife and four kids behind and I think Jan was effectively not the engine behind the scam, so I'm not going to put any of the rest of the story online. But if you want we can take this offline, email in profile.

As for Pieper: there is a reason why his investment vehicle (I take it you are referring to Twinning but there were others as well) did not do well.


Yes, it was Twinning indeed. Blast from the past that one.


Gah, you really sent me down memory lane there; I've been thinking all day about all those people and what happened to them. Quite a few of them have died, some did really well, some went to jail, some have evaporated into thin air. It's a kaleidoscope.

I've been trying to place the exact date of the demo and I suspect it was one of the first he ever did to 'outsiders'.


The little story about the alien at the beginning was interesting. It might be a way of rephrasing information theory as a limit on “measure-ability”.

I remember realizing in high school physics that a perfectly rigid (inelastic) rod would not be possible, because it would allow faster-than-light communication. There's probably an existing result to that effect in information theory that I don't know about: that you can't store more information than is allowed, shown by converting the problem into one of measurement rather than compression.

It might even say something about how small matter is allowed to be.

If you increase the number of sticks the alien is allowed to have, then his task becomes significantly easier. So the question could be rephrased as “what are the fewest sticks the alien could use to complete his task of encoding n bits representing m books”.

Fewer sticks than that would violate this law of measurement (I don’t know if that actually exists but it seems like it) and more than that is wasteful.

At any rate, each extra bit of information doubles the measurement precision you need, so it's clearly impossible.
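
A rough sketch of that precision argument (my own numbers, nothing from the article): encoding an n-bit message as a single notch at position k/2^n along a one-metre stick means distinguishing 2^n positions, so the required precision shrinks by a factor of two per bit.

    import math

    # log10 of the required notch precision (in metres) for an n-bit message
    # on a 1 m stick: you must resolve positions to within 2**-n m.
    def precision_log10(n_bits):
        return -n_bits * math.log10(2)

    for n in (10, 100, 1000):
        print(f"{n:5d} bits -> ~1e{precision_log10(n):.0f} m of precision needed")

    # The Planck length is ~1.6e-35 m, so somewhere around 116 bits the
    # required precision already passes any conceivable physical limit.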


You're on the right track, especially with the last sentence.

If reality had an infinite amount of detail (i.e. matter could be arbitrarily small), we could make storage media as dense as we liked by encoding ones and zeros as the presence or absence of tiny bits of matter.

The alien's stick is a version of this, albeit an exponentially inefficient one restricted to codewords of the form 1111...0000.

In practice, if atoms are about N orders of magnitude smaller than macroscopic objects, we can fit (very roughly) 10^N bits of information in an object, and the alien's method can only fit, as you said, roughly N bits.

Of course, existing storage methods are somewhere in between, because 1 gram of storage media can hold way more than 23 bits but way less than 10^23.
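
A back-of-the-envelope version of those numbers (my own order-of-magnitude estimates): a gram of ordinary matter is on the order of 10^23 atoms, so one-bit-per-atom gives ~10^23 bits, while a single notch measured to atomic precision carries only about the logarithm of that, a few dozen bits.

    import math

    atoms_per_gram = 5e22          # rough order of magnitude (Avogadro-scale)

    # Naive "one bit per atom" storage vs. a single mark whose position is
    # resolved to atomic scale (only ~atoms_per_gram distinguishable positions).
    bits_per_atom_medium = atoms_per_gram
    bits_single_mark = math.log2(atoms_per_gram)

    print(f"one bit per atom : ~{bits_per_atom_medium:.0e} bits")
    print(f"single notch     : ~{bits_single_mark:.0f} bits")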

(I'm handwaving past some important distinctions, like the distinction between the size of atoms and the level of detail in the physical world. In classical Newtonian physics, things can be made of particles but the particles can have perfectly continuous positions, so that there's still no ultimate limit on measurement detail. Quantum physics changes this -- although this gets complicated because of the holographic principle; many physicists think the ultimate information limit grows like the 2/3 power of volume, instead of linearly...)


> many physicists think the ultimate information limit grows like the 2/3 power of volume, instead of linearly

I may have misunderstood and I'm clearly not up to speed on the literature, but even at an intuitive level, wouldn't this violate other principles?

If this is how information entropy scaled, then either we could "work around" it by having more but smaller storage entities adding up to the same volume, which violates the theory directly (because then, when would it ever actually apply?); or it somehow enforces the limit over any given volume regardless, and therefore the entire universe's volume (which isn't even finite?) somehow sets and tracks a global limit, because anything smaller would be a workaround. Neither makes sense to me.

Now if we're saying that the simulation running our universe has limitations that make this true in practice, in some sense that we can measure but can't work around, I will need an explanation as to why we're not totally freaking out right now. There's navel-gazing philosophy, and then there's shit like this, which could mean we discover a wrongwarp to the end credits within this century.


Interestingly, according to Wikipedia¹, Pieper was not a professor of CS as described in the article; instead, he taught "business administration and corporate governance", which would be consistent with his lack of understanding of the topic (nonetheless, this is a giant gap for somebody with a degree in CS).

¹ https://en.wikipedia.org/wiki/Roel_Pieper#Career_in_the_Neth...


That reads just like they're talking about a generative AI model.


Not at that point in time.



