Jacob Ziv has died (twitter.com/erlichya)
710 points by tkhattra on March 26, 2023 | 87 comments



Sad news. Just recently Abraham Lempel [0] died, just days after I tasked my 16 yo daughter with implementing LZW compression as part of her programming education. I chose this particular task for her because many years ago I implemented LZW compression myself, learning a lot about bits, masks and GIFs in the process. R.I.P.

[0]: https://en.m.wikipedia.org/wiki/Abraham_Lempel


> I tasked my 16 yo daughter with implementing LZW compression as part of her programming education

Kudos to the both of you.

I'm sad we're starting to rapidly lose so many of our titans.


In other industries, the death of the inventor of a technology often marks the start of a further wave of innovation.

For all kinds of reasons, the inventor (who is sometimes by now rich and/or powerful), may have a specific vision or path they want to proceed down. When they die, other people start investigating other paths, often with more success.

I wonder if tech will see the same?


No. In tech when the creator quits or dies they’re replaced by MBAs that squeeze every penny out of the creation. (Like authors whose grandchildren direct the trusts which own the copyright a century after the work was written)

New creations in tech have to wait for patents to expire and the incumbents to crumble to the point where they don’t have the cash to buy out the competition any more.


A bit tone deaf considering the thread is about algorithms that are open.


LZW was claimed by Unisys for many years. It wasn't Ziv's fault, it wasn't even Welch's, but it does highlight that compression has historically been a garbage fire of patent harassment. I certainly think it has held the field back.

The patent harassment might even be a factor in the curiosity mentioned elsewhere in this thread, that a lot of practical progress in file compression has been published in a Russian web forum.


Well, plenty of compression algorithms have been patented, or patents have been attempted - e.g. arithmetic coding and Microsoft's rANS patent.


I personally wrote an implementation of LZW in assembly in my late teens. It could deal with GIFs... that was before 1999, when Unisys [0] claimed the patent (for all kinds of uses) and held it for many a year. PNG was created to avoid LZW since it was patented. The algorithm was patented for the encoding part but not for decoding.

[0]: https://eng.libretexts.org/Bookshelves/Electrical_Engineerin...


Plenty of things are totally open, yet innovation is still directed by the inventor.


Still not at all relevant to this case. Algorithms have no such gate keeping because there is no barrier to entry. Algo developers aren’t fighting for some tiny pot of NSF money allocated to particle physics to build a 100 million dollar algorithm detector.


This is only partially true. A great example is arithmetic coding in compression, where research pretty much stalled for almost 40 years (from the mid-1970s until the 2010s) due to patent concerns.


To be fair, Ayn Rand said basically the same thing about industry long before tech bros existed.


I noticed your username after reading your comment, and was not surprised.

For some reason, it seems the former USSR is highly prominent in compression algorithms. RAR, PAQ, encode.su, etc. and of course Andrey Markov.


Incidentally, the guy who wrote RAR (and an incredibly great file manager for Windows, FAR Manager), Eugene Roshal, studied in the very same faculty/specialty as I did. However, he graduated before I entered the university.


> For some reason, it seems the former USSR is highly prominent in compression algorithms.

I’d take a wild guess that it has to do with making do with less, such as limited access to large storage volumes or broadband.

Specific to compression, when you’re “tech poor”, the time you spend waiting for data to get processed is essentially free.


At the time those algorithms were developed, even computers / networks in the West were painfully slow; compression always had its use.

zstd is developed at Facebook, not known for a lack of computing resources.


Still, even if you can reduce data footprint on Facebook by 1%, that's the GDP of a small planet right there.


I could take a wild guess as to your last statement. From the little I've read, maths in the USSR was taught differently than in the West. I remember reading somewhere that they were teaching kids discrete math in elementary/middle school, a topic that is big in CompSci and also a topic I didn't know anything about until my sophomore year at university. If that is true, I would say this may be why, but that is just speculation.


> the former USSR is highly prominent in compression algorithms

And, which is quite odd, a lot of proprietary Windows software.


If your daughter can knock out LZW at 16, she's going to be an incredible programmer.

How do you go about teaching your kids? I don't have kids yet, but if they do have an interest in programming, I'd love to give them a strong head start like you have.


Thanks, I think she has the potential to become a great developer, unless we're all replaced by that ChatGPT thing everyone's talking about.

Regarding teaching, I understood early that while I can program, I have very little understanding of how to teach it. So first, she took two Python courses (basic and advanced) on stepik.org [0], which had automated tests and gave her an understanding of the language.

Then, I planned to start throwing tasks of increasing complexity at her until she solves them, giving her a more or less complete coverage of all things programming. So far, she has written such things as

- Tetris, in the console, with keyboard input

- Snake game, with competing snakes which use a wave algorithm to find paths

- LZW algorithm to learn bits and compression and other things

Next, my plan for her follows roughly this path:

- some GUI programming, using GTK and Windows Forms. Calculator or maybe Checkers or something.

- some web programming, backend and frontend. Most likely Django / jQuery

- maybe some neural networks since they are all so hot now

- some hardcore assembler stuff, like maybe some simple game or visual demo

- some mobile development, Android or iOS

Once she's done with all that, I think she'll have quite a clear picture of most general areas of development and will be free to pursue the area she likes the most.

[0]: https://stepik.org


> some hardcore assembler stuff, like maybe some simple game or visual demo

Demoscene/sizecoding might be interesting, especially if she also likes artistic stuff.


Would you mind sharing:

1) sequence in which you gave her tutorials e.g. first tetris, then snake game, etc.

2) sources that you used to guide her through these tasks? e.g. youtube videos or other links.

Thanks.


First I gave her the basic Python course [0]

Then more advanced Python course [1]

Then we went for Tetris. I did provide her help in getting the initial loop running where you press the key and something happens, then she implemented the game logic by herself. I did ask her to explain the architecture to me in broad strokes prior to development, and gave her some feedback, so the architecture was adjusted a bit.

The first version was built in a very direct way, so once it was done I told her about Object-Oriented Programming and she rewrote Tetris using objects in a more elegant way.

Then she moved on to the Snake game and mostly did it by herself, reusing parts of the code from Tetris (the main game loop, mostly); I mostly provided feedback and beta testing. We had some interesting moments debugging the simple wave algorithm which the computer-controlled snakes use to find a target (rough sketch below), and improving performance. Then we moved on to this LZW thing and, unfortunately, its creators immediately started dying.
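For context, the "wave algorithm" is essentially a breadth-first flood fill over the grid. A minimal Python sketch of the idea (my own illustration, not her code):

    from collections import deque

    def wave_path(grid, start, target):
        # Breadth-first "wave" expansion: walls are 1, free cells 0.
        # Returns the list of cells from start to target, or None if unreachable.
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        frontier = deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == target:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                    prev[nxt] = cell
                    frontier.append(nxt)
        return None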

The reason I chose these particular tasks is that they are all relatively limited in scope, so each learning task is not so enormous that you never finish it, and she actually has a chance to ship a finished product to show to friends/teachers.

[0]: https://stepik.org/course/67/promo

[1]: https://stepik.org/course/512/promo


Thanks for sharing.

I followed this teaching path: Scratch > Python > SQLite > JavaScript (web). I am missing algorithms and OOP, and will look for gradual learning resources in these areas.

One recent area I tried was observability. With Prometheus and Grafana, they got into collecting more data points and creating visualizations for them. Bangle.js is another one.


And it seems to me, at least factoring in the upcoming projects, that the plan covers different types of work, not merely progressive difficulty. So she'll get a taste of games, a taste of systems, a taste of web UI, a taste of embedded, etc. It sounds great.


> using GTK

Maybe Qt instead?

I suggest this because the Qt docs tend to be much, much better, plus it's realistically cross-platform, which opens up more possibilities.


Maybe, but that's not the point. I'd rather do GTK because back in 2004 or so I wrote some software that used GTK, so I'm more familiar with it - back then Qt still had those licensing issues, so everyone who wrote non-free software considered it to be "good, but that license ...".


I assumed the point was to get a taste of desktop UI app development along with a taste of 20 other kinds of development, not to learn GTK or Qt or any particular framework.


Fantastic list! Although obviously no such plan can really cover the whole space, I can't help but want to suggest adding some programming-language representation to the list: a simple interpreter or compiler project, or alternatively some symbolic computation, e.g. algebra or differential calculus. Being exposed to this stuff via Racket was hugely transformative for me (e.g. https://beautifulracket.com/stacker/why-make-languages.html), but if Lisps aren't your thing there are plenty of great projects in e.g. Java as well: https://craftinginterpreters.com/contents.html


Yes, the idea is to do these tasks using a variety of languages, not just Python. With this LZW task, for example, she has already written compression and decompression in Python and is now working on doing it in C, so that a file compressed by the Python script can be decompressed by the C program.
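For anyone who wants to try the same exercise, a minimal LZW compressor in Python looks roughly like this (a textbook-style sketch, not her code; it emits dictionary codes rather than packed bits):

    def lzw_compress(data: bytes) -> list[int]:
        # Start with all single bytes in the dictionary, then grow it with
        # every new sequence seen, emitting codes for known prefixes.
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        w = b""
        out = []
        for byte in data:
            wk = w + bytes([byte])
            if wk in dictionary:
                w = wk
            else:
                out.append(dictionary[w])
                dictionary[wk] = next_code
                next_code += 1
                w = bytes([byte])
        if w:
            out.append(dictionary[w])
        return out

    print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))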

Regarding Racket, it looks interesting. While I never did anything like it myself back in the day, it looks like it'll be an interesting task for my daughter further down the road.


> unless we all aren't replaced by that chatGPT thing everyone's talking about.

Yes. Bravo to you and your daughter for your perseverance in spite of the looming "disruption" coming for all human knowledge workers.


You are awesome and your daughter is awesomer


I hope that an LZW compressor was something she would have naturally wanted to hack on herself, rather than something she’s only coding because she was “tasked” to.

The most surefire way to kill a young hacker’s spirit is to force them to hack on things they don’t care about. This goes for anything, not just computing. To wit, a childhood friend of mine was a gifted pianist who really loved jazz. His parents forced him to study with a classical teacher. Guess who doesn’t play piano anymore?


Well, she seems to be interested in programming and I don't force it on her. So far I think she is doing quite well. I chose LZW for her since my strategy is to give her quite a few diverse tasks to build a broad (albeit not very deep) understanding of the general areas of development. I have covered this topic in more detail in a sibling comment here: https://news.ycombinator.com/item?id=35318995


Another anecdote to balance:

When I was a kid who played chess, the kids who were 'forced' to play chess -- the ones who came from a long chess-playing family and had different scenarios drilled into their heads from the day they could talk -- were much more talented at chess than I was.

Their talent persuaded them to continue pursuing it far longer than I ever did, with many of them still involved.

The most involvement I have now is a friendly lichess game every few months.

It was their talent that allowed them to find more enjoyment within the pursuit than I ever did, and that talent was essentially forced by their immediate family.

So, essentially, I am just saying that the forced persuasion thing that parents sometimes do is a double-edged sword; it can create animosity and bitter hatred, but it can also create the drive and motivation for boundless talent and personal satisfaction.


I’m pretty good at computers but had no real guidance. I now hire young people with a genuine interest, then gift them equipment and guidance that will last them their entire lifetime (Even if they get a career in something else).

There’s such a thing as pushing too hard, but nobody got buff without being pushed by those around them.


> nobody got buff without being pushed by those around them

This is completely contrary to my experience. The successes I’ve had were my own motivations and drive.


This made me a little sad. I remember my parents putting water colors, carving sets, parks and rec, trumpets, books, computers, you name it, in front of me to help me find something I was interested in. They didn't force any particular thing on me, but definitely exposed me to many things to help me find my path. Then they supported my interests in ways that boggle my mind looking back. I don't know how they could afford to provide me many of the experiences I had.

I sincerely hope that you were just ignorant of the help you received along the way. A child with absolutely no support or encouragement is a very depressing thought.


I think you are misinterpreting "push". In the comment you replied to, it is (I believe) being used in the sense of "pressure, force, compel", rather than "provide with opportunity". My reading of TedDoesntTalk's comment does not indicate that they were provided with no support or opportunity - merely that the motivation to persist with those opportunities, once they ceased being easy fun distractions and required dedication for improvement, arose internally.


Precisely. Thank you.


You had a very privileged childhood then. The only response to my interests I got from my parents was "you are already good at school, why are you straining yourself doing these unnecessary things".

And I did not have bad parents. Many actually hinder their children's development, instead of being ambivalent to it.


The only reason I got into development (and have a successful career) today is because I was able to work on what I wanted to prior to starting said career. If a parent had 'tasked' me with anything, it would have turned me off. Not because I'm disobedient or anything, but because I have disabilities that don't agree with that method of learning.


For the record, 'tasked' is probably not the best word to describe for what exactly happened (English is not my native language). In fact, what happened was this:

- Dad, I'm so done with this Snake game, give me some other task?

- Ok, how about making that LZW thing we talked about earlier?

- Cool.


Can you explain more? My whole life I have felt the same. If I have to do something, I just hate it and can't move forward. But if it is my choice, I'm really good at it.


Heh Heh Heh

Isn't the general term for that a "problem with authority"? ;)


The most extreme example of this is the Polgár sisters:

https://en.m.wikipedia.org/wiki/Judit_Polgár

Judit Polgár (born 23 July 1976) is a Hungarian chess grandmaster, widely regarded as the strongest female chess player of all time.

Polgár and her two older sisters, Grandmaster Susan and International Master Sofia, were part of an educational experiment carried out by their father, László Polgár, in an attempt to prove that children could make exceptional achievements if trained in a specialist subject from a very early age. "Geniuses are made, not born," was László's thesis. He and his wife Klára educated their three daughters at home, with chess as the specialist subject.


Isn't this just assuming the "forced" students didn't in fact want to keep playing?


I don't think it works that way.

I picked up skills from my parents because I was curious and they were both knowledgeable in their respective fields (my dad, a medical equipment engineer, could repair a TV, which for a five-year-old amounted to a superpower).

I was never forced to do anything and yet had the exact same headstart you're talking about, but in e.g. school math.

Of course that did a lot of damage to my work ethic, but the point is that kids are naturally curious and, at least initially, want to do what their parents do.


I get where you are coming from, and yet when I was introduced to LZW I wasn't that interested, but by the end it was the coolest thing I had ever seen. By having control over the entire input and output stream you can embed within it the very properties you are trying to achieve. That was a game changer.


All the people with computer science degrees working in the field who also had to take common core courses that they 'didn't care about' seem to serve as a counterexample to this general claim.


There’s a local jazz musician named… Jaz. His father was a big fan of the genre and it stuck.

I’ve seen him play at SF Jazz a few times and it seems like he loves his job.


RIP. I remember when I was a young guy learning programming, reading the source code of various DOS-based LZ, LZH or LZW compression utilities and implementing an LZH algorithm myself, thinking it would bring me great fame and fortune. Lempel, Ziv and Eugene Roshal were my heroes.

At that time many young programmers regarded compression, compiler writing and anti-virus writing as very important. Now people just learn React. :)


Compression and compiler writing are just as important today as they were back then. And the closer we get to the real end of cheap hardware improvements, the more important they will be. They are where real progress lies; the rest was just a temporary freebie.

Take cars: cars have a practical upper limit, rooted in physics, so low that the difference between 'mild' (100 kph on the highway) and insane (300 kph+, no matter where) is a mere factor of three. And it took a good 100 years before that barrier was broken. It won't progress much beyond the standard of 100 or so kph under normal circumstances. There are just too many things that work against you when you go much faster than that, including energy consumption, braking distance, the willingness of your vehicle to stay on the ground, attention requirements (ok, AI... someday...) and so on. So even if it is possible in exceptional cases it likely won't be the norm, even on closed-off and evacuated freeways.

Aircraft did a little better, but even there the supersonic passenger jet came and went. Though it may be back one day.

In comparison computers have gotten many orders of magnitude faster in a much shorter time. But there is a physical limit, and we are very likely to run into that soon. And once that happens the only way to progress is to make more efficient use of the resources you've already got.


I don’t agree. Aircraft had their growth phase earlier, from first flight in 1903 to landing on the moon in 1969. Airliners look like they did in the 60s because there’s nothing left to improve on with regards to efficiency. How many orders of magnitude are there between the Wright Flyer and a 747?

We’re reaching the diminishing returns portion of the logistic curve for computing with regards to both things like hardware and compiling, compression, etc. It’s just hard to compare industries which are done growing with ones that still feel like they might be exponential forever.


> It’s just hard to compare industries which are done growing with ones that still feel like they might be exponential forever.

Some industries never evolved much. I don't see a big functional difference between cars 100 years ago and cars now. And there were electric cars in the 1900s.

A car is constrained by lots of things, just as the GP comment said. Putting more gadgets in a car won't change what a car is or how it works at a fundamental level.

From the very first microprocessors to today's there is a big leap. If we applied that leap to cars, they should be using nuclear fusion and taking us to the moon and back in seconds. And it's not like trillions of dollars weren't invested in car R&D over more than a century.

Software growth seems exponential only because of hardware and Moore's Law. Which will come to an end soon.

Thanks to Moore's Law, we programmers could afford to be careless, not think about performance, and use unnecessary abstractions purely out of aesthetic or ideological considerations. It allowed businesses to say that time to market trumps everything and that customers should simply buy more powerful boxes or more boxes. This will come to an end soon and we will have to think about efficiency, speed and performance.

Or maybe chips will move from silicon to some alternative processes where growth is still possible.


You are making my point stronger rather than weaker: Aircraft have only become more efficient in the last 50 years or so and that's precisely the equivalent of better compilers and compression.

And the diminishing returns for computing are for the most part driven by the lack of investment in new ways of computing, because we got so much for free. But once the free ride is over you can expect the research into efficiency to be picked up again, because that will be the only field where real progress can be made. There have been a few half-assed (apologies to those involved, it was no doubt a ton of work) attempts at making computing fabrics and clockless machinery. And those went nowhere because by the time you have something working, the scaling advantages have already overtaken your work, and absolutely nobody is going to give up a working architecture for something unproven if it doesn't give an immediate return. Some pretty good ideas died like that.

But I suspect their day will come.


> We’re reaching the diminishing returns

Yes, for digital computing. But computing is on the cusp of becoming more non-deterministic, which will allow analog computing to absolutely demolish digital computing in performance.


> In comparison computers have gotten many orders of magnitude faster in a much shorter time. But there is a physical limit, and we are very likely to run into that soon. And once that happens the only way to progress is to make more efficient use of the resources you've already got.

Good points. And I will even argue that while we haven't hit a physical limit in hardware yet, it's still important to have reasonable performance, because poor performance means resources are wasted, money is wasted, and it may lead to user frustration.

But that doesn't seem to be the majority's opinion, or at least not most business owners' opinion.


Unfortunately, until crappy and slow-performing software starts losing business to well-implemented alternatives, it will never change. The current PM model is purely based on shipping fast, and any thought to performance is immediately dismissed as "premature optimization." I've been consulting for companies that had to rolling-reboot their entire cluster of app servers every couple of hours because a host of memory leaks (that also added O(n) runtime) would slowly exhaust resources. Rather than have someone fix the problem, they chose to invest in devops to automate the rolling restarts. This is not all that unusual in my experience, because customers will tolerate very sluggish response times, so it's not worth fixing to the business.


Entirely my experience. I worked for a very large company and since our microservices were running in Kubernetes it was totally OK for the services to crash. We just logged things in the Elastic stack, used Datadog to monitor, and used some SRE to restart services when things got messy.

I lost days solving a memory bug in one of the services and no one cared.

The original architects and programmers were gone, the engineering managers were pushing Clean Code, SOLID and design patterns hoping that would help, and the domain was hard because we had to deal with real money while respecting tens of different laws and locales.

We tried our best to do the impossible; we were understaffed by a factor of 5.

And the services were crashing like crazy, but it looked like there was no downtime because we provisioned another pod in Kubernetes.

Provisioning was like: how many pods do we need? Three? Let's make it nine. How much memory do we need? Half a gig? Let's make it 4 gigs to be sure.

So it was a mess and it worked and that mess continues to work somehow.

Not sure if mess-driven engineering is a sound business idea, though.


Erlang has taken 'let it crash' to entirely new heights, but that's from a top level viewpoint, components are allowed to crash but the service isn't. And you're still supposed to figure out why your service crashed but I'm pretty sure not everybody does that.

So as long as your supervision mechanism is bulletproof (supervision trees) then you can get away with this for very long.


>At that time many young programmers regarded compression […] as very important.

They still do regard it as important, just in a different sense. Shannon formally proved that compression is congruent to statistical inference — an efficient compression algorithm seeks to minimize the statistical entropy of the scheme used to encode a message. Compression is thus loosely congruent to a branch of statistical inference you may have heard a thing or two about called "artificial intelligence" [0], which many young programmers are very much engaged with.

Take the Hutter Prize [1], which awards money for the ability to compress the entire English Wikipedia, or the Large Text Compression Benchmark [2], whose current top performer is a transformer-based neural network [3]. As the LTCB's rationale says [0],

>Given a probability distribution P over strings s, the Shannon capacity theorem states that the optimal (shortest average) code for s has length log 1/P(s) bits. If the distribution P is known, then such codes are in fact easy to find and compute. The problem is that for natural language, we do not know how to explicitly compute the model P(s) for a given string s.

>However humans must implicitly know P(s) because our speech and writing, by definition, follows this distribution. Now suppose that a machine can compute P(s) for all s. Then for any question q (or dialog ending in a question) and any answer a, the machine can compute P(a|q) = P(q,a)/P(q), and use this distribution to answer any question q posed by the judge. By definition, this distribution would be identical to the distribution of answers given by the human, and the two would be indistinguishable.

[0] http://www.mattmahoney.net/dc/rationale.html

[1] http://prize.hutter1.net/

[2] http://www.mattmahoney.net/dc/text.html

[3] https://bellard.org/nncp/
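To make the log 1/P(s) bound quoted above concrete, here is a toy sketch (my own illustration, using a crude order-0 character model rather than a real language model):

    import math
    from collections import Counter

    def ideal_code_length_bits(s: str) -> float:
        # Shannon's bound: an optimal code spends log2(1/P(symbol)) bits per symbol.
        # P here is a toy order-0 model estimated from the string itself.
        counts = Counter(s)
        total = len(s)
        return sum(-math.log2(counts[c] / total) for c in s)

    text = "abracadabra"
    print(f"{ideal_code_length_bits(text):.1f} bits vs {8 * len(text)} bits uncompressed")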


While compression is congruent with some cases of statistical inference, no one is trying to do statistical inference using compression. Even the example you gave of Fabrice Bellard (whom I highly admire) does the converse and uses neural networks to compress.

Even to realize what you've said, one needs a bit of theory, basic computer science and some math. And most people today have a "do" philosophy and are result-oriented. They are not learning-oriented; they hate CS and math and they hate low-level programming.

I don't think compression is highly relevant today, but at the time I started learning it was a good tool to sharpen one's mind. Learn algorithms, learn about memory management, learn about hardware, learn low level constructs, learn a bit of math.

Whenever a young person wanting to undertake a programming career asks me for advice, I ask what their motivation is. If they just want money, I advise them to learn the tools in fashion today, maybe JavaScript and React, maybe Python, maybe C#. But if they respond that they want to do it for both enjoyment and money, I put them on the hard path. I ask that they enroll in a BS CS program, or, if that is not possible, give them long lists of books, tutorials and courses which cover CS basics, the math used in CS, hardware, low-level programming and the hot programming topics du jour. In that last case I also design their learning path to accommodate their goals, and I always take time to answer their questions in the future if I can; if not, I will at least point out where they should look or who the right person to ask is.


> no one is trying to do statistical inference using compression

Well, maybe no one sensible? I still think it's quite cool that you can take any general-purpose compression algorithm and abuse it to do e.g. image classification. (Just append the image you want to classify to each of the classes' sets in turn, and see which compresses best!)
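A toy sketch of that trick using zlib (my own illustration; the sample data is made up):

    import zlib

    def classify_by_compression(query: bytes, class_samples: dict) -> str:
        # Assign the query to the class whose sample data it compresses best with:
        # the smaller the size increase, the more structure it shares with that class.
        def extra_bytes(label):
            baseline = len(zlib.compress(class_samples[label]))
            combined = len(zlib.compress(class_samples[label] + query))
            return combined - baseline
        return min(class_samples, key=extra_bytes)

    samples = {
        "python": b"def f(x):\n    return x + 1\n" * 20,
        "html": b"<div class='row'><span>hi</span></div>\n" * 20,
    }
    print(classify_by_compression(b"def g(y):\n    return y * 2\n", samples))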

And actually I do remember a paper that tried to use ideas from PAQ to do online learning. Gated Linear Networks, out of DeepMind, in 2019:

https://arxiv.org/abs/1910.01526

All compression is related to AI, but sample-efficient online learning especially is basically what data compression is all about.


Well, stochastic processes are a much wider category than the mere algebraic operations on group theory that lossless data compression uses.

I'd think that calculus or functional theory or category theory can find more bijections towards statistics, or even congruences, than mere arithmetic or algebra ever will. OK, you can explain or derive any mathematical construct using only algebra, and there were efforts to do so, but does it make sense?


LZ77 and LZ78 are ubiquitous, forming the basis of compression schemes used in ZIP, PNG, GIF, and Yann Collet's Zstandard.

[0] https://en.wikipedia.org/wiki/LZ77_and_LZ78


HN served this webpage to me using `content-encoding: gzip`, which is using LZ77 (combined with Huffman coding) if you look far enough under the hood.
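A quick way to see that in action from Python (an illustration, not how HN's server actually does it):

    import gzip

    page = b"<html><body>" + b"<tr><td>a comment</td></tr>" * 500 + b"</body></html>"
    packed = gzip.compress(page)  # DEFLATE under the hood: LZ77 matches + Huffman coding
    print(len(page), "->", len(packed), "bytes")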


In an interview, Jacob Ziv said that it was the uncomputability of Kolmogorov complexity that forced him to think about a different way to define the complexity of a single finite string: this is what led to Lempel-Ziv complexity in 1976, and its famous consequences, the LZ77 and LZ78 compressors.

https://web.archive.org/web/20181123231027/http://backup.its...


It is fitting that such a large name is expressed in so few characters. RIP.


It's even one fewer in his native Hebrew: "יעקב זיו"


Two Professors Emeritus from my alma mater, the Technion - one from CS (Prof. Lempel), the other from EE (Prof. Ziv). I distinctly remember noticing them in the old group photos of the first few graduating cohorts of the B.Sc. in CS.

In Hebrew (or rather Aramaic) we say:

חבל על דאבדין ולא משתכחין

A pity for those who are lost and cannot be replaced.


From wikipedia:

  Jacob *Z*iv (Hebrew: יעקב זיו; 27 November 1931 – 25 March 2023)
  was an Israeli electrical engineer who, along with Abraham
  *L*empel, developed the *LZ* family of lossless data compression
  algorithms.


RIP

With better compression, heaven will soon have more space available.


To everyone who is wondering, there is a black bar, it's just one pixel.


I love this :)


Paying my respects to a pioneer.

I started a project to implement an LZ77 decoder in Game Boy assembly to see if it could compress the sprites in Pokémon Red and Blue better than the algorithms actually used in the game. Results are inconclusive so far, but it's been an enlightening experience.


May his memory be a blessing.


I remember my first LZW implementation. It was so amusing to me that in the one case where something didn't exist in the dictionary yet, you knew what it was and could add it and keep going!


The famous k<w>k<w>k case... indeed.

I think, though, that most instructional texts do the reader a disservice by starting with LZW (and thus having to deal with the kwkwk case from the get-go). They should start with LZ78, which is much simpler and more elegant; then show how -- although unimportant asymptotically -- we can save some bits by using Welch's trick (the W in LZW), and show how that gives rise to the special case which needs to be handled.

Somewhat unexpectedly (for me anyway), if you write the decompressor as a memory stream decompressor rather than as a dictionary decompressor -- that is, instead of putting <w>,k pairs into your decoder dictionary while decoding, you note the output <offset> of where that <w> first appeared - there is no special case, and the implementation goes back to being LZ78 elegant and uniform.
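For the curious, the special case is a single branch in the textbook dictionary-style decoder (a sketch assuming the usual 256-entry byte alphabet, not anyone's particular implementation):

    def lzw_decompress(codes: list) -> bytes:
        # Rebuild the dictionary while decoding; mirrors the classic compressor.
        dictionary = {i: bytes([i]) for i in range(256)}
        next_code = 256
        w = dictionary[codes[0]]
        out = bytearray(w)
        for code in codes[1:]:
            if code in dictionary:
                entry = dictionary[code]
            elif code == next_code:
                # the famous k<w>k<w>k case: the code isn't in the dictionary yet,
                # but it can only be w followed by w's first byte
                entry = w + w[:1]
            else:
                raise ValueError("bad LZW stream")
            out += entry
            dictionary[next_code] = w + entry[:1]
            next_code += 1
            w = entry
        return bytes(out)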


I’m a bit surprised that neither Lempel nor Ziv were worthy of a black bar here.


RIP. I just got done implementing a modified version of LZSS which is crucial to fitting all the music into a game I'm writing. Respect.


Lempel, Moore, Ziv... This year is very unlucky


RIP.

This is worthy of an HN black bar.


I'd bet that an email to hn@ would fix it; I don't believe the black bar is programmatic and thus maybe dang hasn't seen this yet


Rest in peace! <3


Oh no!





