If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

Co-founder of Quickwit here. Seeing our acquisition by Datadog on the HN front page feels like a truly full-circle moment.

HN has been interwoven with Quickwit's journey from the very beginning. Looking back, it's striking to see how our progress is literally chronicled in our HN front-page posts:

- Searching the web for under $1000/month [0]

- A Rust optimization story [1]

- Decentralized cluster membership in Rust [2]

- Filtering a vector with SIMD instructions (AVX-2 and AVX-512) [3]

- Efficient indexing with Quickwit Rust actor framework [4]

- A compressed indexable bitset [5]

- Show HN: Quickwit – OSS Alternative to Elasticsearch, Splunk, Datadog [6]

- Quickwit 0.8: Indexing and Search at Petabyte Scale [7]

- Tantivy – full-text search engine library inspired by Apache Lucene [8]

- Binance built a 100PB log service with Quickwit [9]

- Datadog acquires Quickwit [10]

Each of these front-page appearances was a milestone for us. We put our hearts into writing those engineering articles, hoping to contribute something valuable to our community.

I'm convinced HN played a key role in Quickwit's success by providing visibility, positive feedback, critical comments, and leads who contacted us directly after a front-page post. This community's authenticity and passion for technology are unparalleled. And we're incredibly grateful for this.

Thank you all :)

[0] https://news.ycombinator.com/item?id=27074481

[1] https://news.ycombinator.com/item?id=28955461

[2] https://news.ycombinator.com/item?id=31190586

[3] https://news.ycombinator.com/item?id=32674040

[4] https://news.ycombinator.com/item?id=35785421

[5] https://news.ycombinator.com/item?id=36519467

[6] https://news.ycombinator.com/item?id=38902042

[7] https://news.ycombinator.com/item?id=39756367

[8] https://news.ycombinator.com/item?id=40492834

[9] https://news.ycombinator.com/item?id=40935701

[10] https://news.ycombinator.com/item?id=42648043


I worked for Peter Kirstein for many years - he always had wonderful stories to tell.

In the article Peter talks about the temporary import license for the original ARPAnet equipment. The delayed VAT and duty bill for this gear prevented anyone else from taking over the UK internet in the early days, because the bill would then have become due. But he didn't mention that if the original ARPAnet equipment was ever scrapped, the bill would also become due.

When I was first at UCL in the mid 1980s until well into the 90s, all that equipment was stored disused in the men's toilets in the basement. Eventually Peter decided someone had to do something about it, but he couldn't afford the budget to ship all this gear back to the US. Peter always seemed to delight in finding loopholes, so he pulled some strings. Peter was always very well connected - UCL even ran the .int and nato.int domains for a long time. So, at some point someone from UCL drove a truck full of obsolete ARPAnet gear to an American Air Force base in East Anglia that was technically US territory. Someone from the US Air Force gave them a receipt, and the gear was officially exported. And there it was left, in the US Air Force garbage. Shame it didn't end up in a museum, but that would have required paying the VAT bill.


That March 1977 map always brings back a flood of memories to this old-timer.

Happy nights spent hacking in the Harvard graduate computer center next to the PDP-1/PDP-10 (Harv-1, Harv-10), getting calls on the IMP phone in the middle of the night from the BBN network operations asking me to reboot it manually as it had gotten wedged...

And, next to me, Bill Gates writing his first assembler/linker/simulator for the Altair 8080... (I tried talking him out of this microcomputer distraction -- we have the whole world of mainframes at our fingertips! -- without success.)

(Edit:) We also would play the game of telnet-till-you-die, going from machine to machine around the world (no passwords on guest accounts in the early days), until the connection died somewhere along the way.

Plus, once the hackers came along, Geoff Steckel (systems guy on the PDP-10) wrote a little logger to record all incoming guests' keystrokes on an old teletype, so we could watch them attempting to hack the system.


Last Saturday, two other firefighters and I managed to find a woman lost in a maze of 40+ miles of trails. Her hip had dislocated and she could not move. The temperatures were in the upper 20s F (-3C or so), with serious windchill amidst 35mph/56kph wind gusts. It was extremely dark. We stayed with her and tried to keep her as warm as possible until a UTV arrived to extricate her. I was home by 01:00, after 4hrs outside under the stars.

In the end we didn't really do much at all, but it felt like one of the most meaningful nights of my entire life.


My twitter account wasn't big, but it was non-trivial (~30K followers). A post could usually get me to experts on most topics, find people to hang out with in most countries, etc. There were many benefits, so deleting was very hard.

But it was eating my brain. I found myself mostly having tweet-shaped thoughts, there was an irresistible compulsion to check mentions 100 times a day, I somehow felt excluded from all the "cool" parts which was making me miserable. But most importantly, I was completely audience captured. To continue growing the account I had to post more and more ridiculous things. Saying reasonable things doesn't get you anywhere on Twitter, so my brain was slowly trained to have, honestly, dumb thoughts to please the algorithm. It also did something to attention. Reading a book cover to cover became impossible.

There came a point when I decided I just didn't want this anymore, but signing out didn't work -- it would always pull me back in. So I deleted my account. I can read books again and think again; it's plainly obvious to me now that I was very, very addicted.

Multiply this by millions of people, and it feels like a catastrophe. I think this stuff is probably very bad for the world, and it's almost certainly very bad for _you_. For anyone thinking about deleting social media accounts, I very strongly encourage you to do it. Have you been able to get consumed by a book in the past few years? And if not, is this _really_ the version of yourself you want?


In 2005, my paper on breaking RSA by observing a single private-key operation from a different hyperthread sharing the same L1 cache -- literally the first publication of a cryptographic attack exploiting shared caches -- was rejected from the cryptology preprint archive on the grounds that "it was about CPU architecture, not cryptography". Rejection from journals is like rejection from VCs -- it happens all the time and often not for any good reason.

(That paper has now been cited 971 times according to Google Scholar, despite never appearing in a journal.)


The original answer to "why does FastMail use their own hardware" is that when I started the company in 1999 there weren't many options. I actually originally used a single bare metal server at Rackspace, which at that time was a small scrappy startup. IIRC it cost $70/month. There weren't really practical VPS or SaaS alternatives back then for what I needed.

Rob (the author of the linked article) joined a few months later, and when we got too big for our Rackspace server, we looked at the cost of buying something and doing colo instead. The biggest challenge was trying to convince a vendor to let me use my Australian credit card but ship the server to a US address (we decided to use NYI for colo, based in NY). It turned out that IBM were able to do that, so they got our business. Both IBM and NYI were great for handling remote hands and hardware issues, which obviously we couldn't do from Australia.

A little bit later Bron joined us, and he automated absolutely everything, so that we were able to just have NYI plug in a new machine and it would set itself up from scratch. This all just used regular Linux capabilities and simple open source tools, plus of course a whole lot of Perl.

As the fortunes of AWS et al rose and rose and rose, I kept looking at their pricing and features and kept wondering what I was missing. They seemed orders of magnitude more expensive for something that was more complex to manage and would have locked us into a specific vendor's tooling. But everyone seemed to be flocking to them.

To this day I still use bare metal servers for pretty much everything, and still love having the ability to use simple universally-applicable tools like plain Linux, Bash, Perl, Python, and SSH, to handle everything cheaply and reliably.

I've been doing some planning over the last couple of years on teaching a course on how to do all this, although I was worried that folks are too locked in to SaaS stuff -- but perhaps things are changing and there might be interest in that after all?...


Good times. I was the developer at Microsoft who designed the Xbox 360 hardware security and wrote all the boot loaders and the hypervisor code.

Note to self: you should have added random delays before and after making the POST code visible on the external pins.


The key clarification is in one of the comments: if you want to treat partial derivatives like fractions, you need to carry the "constant with respect to foo" modifier along with both numerator and denominator.

Once you do that, it's clear that you can't cancel "dx at constant z" with "dx at constant y" etc. And then the remaining logic works out nicely (see thermodynamics for a perfect application of this).
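
A worked example from thermodynamics (the triple product rule) shows why the modifiers matter; in LaTeX notation:

    \left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1

Naive fraction-cancelling would predict +1, but no two factors share the same held-constant variable, so nothing actually cancels.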


In my province in Canada (Quebec) some IT worker in the government made a template .doc document that's widely used. They exported it with the document title "sdf fdsfdsfg". This title isn't in the actual printed document but it's in the metadata. If you Google that string in quotes you'll find tons of official documents with that title.

https://www.google.com/search?q=%22Sdf+fdsfsdfg%22


For the past year or so, I've been trying (off and on) to formalise part of my undergraduate complex analysis course in Lean. It has been instructive and rewarding, but also sometimes frustrating. I only recently was able to fully define polar form as a bijection from C* to (-pi,pi] x R, but that's because I insisted on defining the complex numbers, (power) series, exp and sin "from scratch", even though they're of course already in mathlib.

Many of my troubles probably come from the fact that I only have a BSc in maths and that I'm not very familiar with Lean/mathlib and don't have anyone guiding me (although I did ask some questions in the very helpful Zulip community). Many results in mathlib are stated in rather abstract ways and it can be hard to figure out how they relate to certain standard undergraduate theorems - or whether those are in mathlib at all. This certainly makes sense for the research maths community, but it was definitely a stumbling block for me (and probably would be one if Lean were used more in teaching - but this is something that could be sorted out given more time).

In terms of proof automation, I believe we're not there yet. There are too many things that are absolutely harder to prove than they should be (although I'm sure there are also a lot of tricks that I'm just not aware of). My biggest gripe concerns casts: in "regular" mathematics, the real numbers are a subset of the complex numbers, so things that are true for all complex numbers are automatically true for all reals[0]; but in Lean they're different types with an injective map / cast operation, and there is a lot of back-and-forth conversion that has to be done, which muddies the essence of the proof, especially when you have "stacks" of casts, e.g. a natural number cast to a real cast to a complex number etc.
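
A minimal Lean sketch of that friction (assuming a recent mathlib; norm_cast is the standard tactic for collapsing such cast stacks):

    import Mathlib.Tactic

    -- A "stack" of casts: a natural number coerced to ℝ, then to ℂ.
    -- Mathematically a triviality; in Lean it needs an explicit tactic.
    example (n : ℕ) : ((n : ℝ) : ℂ) = (n : ℂ) := by norm_cast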

Of course, this is somewhat specific to the subject, I imagine that in other areas, e.g. algebra, dealing with explicit maps is much more natural.

[0] This is technically only true for sentences without existential quantifiers.


I took that class with you! It was amazing. Here are my notes: https://wstein.org/books/ribet-stein/

This reminds me of a fun experience I had in grad school. I was working on writing some fast code to compute something I can no longer explain, to help my advisor in his computational approach to the Birch and Swinnerton-Dyer conjecture. I gave a talk at a number theory seminar a few towns over, and was asked if I was doing this in hopes of reinforcing the evidence behind the conjecture. I said with a grin, "well, no, I'd much rather find a counterexample." The crowd went wild; I've never made a group of experts so angry as that day.

Well, I never was much of a number theorist. I never did come to understand the basic definitions behind the BSD conjecture. Number theory is so old, so deep, that writing a PhD on the topic is the step one takes to become a novice. Where I say that I didn't understand the definitions, I certainly knew them and understood the notation. But there's a depth of intuition that I never arrived at. So the uproar of experts, angry that I had the audacity to hope for a counterexample, left me more curious than shaken: what do they see, that they cannot yet put words to?

I am delighted by these advances in formalism. It makes the field feel infinitely more approachable, as I was a programmer long before I called myself a mathematician, and programming is still my "native tongue." To the engineers despairing at this story, take it from me: this article shows that our anxiety at the perceived lack of formalism is justified, but we must remember that anxiety is a feeling -- and the proper response to that feeling is curiosity, not avoidance.


The obituaries are often my favourite part of the Economist. They are edited by Ann Wroe - https://en.wikipedia.org/wiki/Ann_Wroe - a true legend.

They focus mostly on long-forgotten people and create an intense glimpse into the short timeframe when their life made a big impact.

My favourite obit of all time of hers is the one of Bill Millin: https://www.economist.com/obituary/2010/08/26/bill-millin (https://archive.is/iZifs)


Oh, no, now I have to go dig out some of mine....

The first really big one I wrote was the ~7000 line installer for the Entrust CA and directory, which ran on, well, all Unixes at that time. It didn't initially, of course, but it grew with customer demand.

The installation itself wasn't especially complicated, but upgrades were, a little, and this was back when every utility on every Unix had slight variations.

Much of the script was figuring out and managing those differences, much was error detection and recovery and rollback, some was a very primitive form of package and dependency management....

DEC's Unix (the other one, not Ultrix) was the most baffling. It took me days to realize that all command line utilities truncated their output at column width. Every single one. Over 30 years later and that one still stands out.

Every release of HP-UX had breaking changes, and we covered 6.5 to 11, IIRC. I barely remember Ultrix or the Novell one or Next, or Sequent. I do remember AIX as being weird but I don't remember why. And of course even Sun's three/four OS's had their differences (SunOS pre 4.1.3; 4.1.3; Solaris pre 2; and 2+) but they had great FMs. The best.


Most explanations of C++'s std::move fail because they don't focus on its actual effect: controlling which function overload gets selected.

Most developers have no trouble getting the idea of C++'s function overloading for parameter types that are totally different, e.g. it's clear what foo("xyz") will call if you have:

   void foo(int x);
   void foo(std::string x);
It's also not too hard to get the idea with const and mutable references:

   void foo(std::string& x);
   void foo(const std::string& x);
Rvalue references allow another possibility:

   void foo(std::string&& x);
   void foo(const std::string& x);
(Technically it's also possible to overload with rvalue and non-const regular references, or even all three, but this is rarely done in practice.)

In this pairing, the first option would be chosen for a temporary object (e.g. foo(std::string("xyz")) or just foo("xyz")), while the second would be chosen if passing in a named variable (std::string x; foo(x)). In practice, the reason you bother to do this is so that the first overload can pilfer memory resources from its argument (whereas, presumably, the second will need to do a copy).

The point of std::move() is to choose the first overload. This has the consequence that its argument will probably end up being modified (by foo()) even though std::move() itself does not contain any substantial code.

All of the above applies to constructors, since they are functions and they can also be overloaded. Therefore, the following pass-by-value function behaves very similarly in most practical situations, since std::string's overloaded copy and move constructors play the same role:

   void foo(std::string x);
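
A runnable sketch of the overload selection described above (the printed messages are just illustrative):

    #include <iostream>
    #include <string>
    #include <utility>

    void foo(std::string&& x)      { std::cout << "rvalue overload: may pilfer " << x << "\n"; }
    void foo(const std::string& x) { std::cout << "lvalue overload: must copy " << x << "\n"; }

    int main() {
        std::string s = "xyz";
        foo(s);                   // named variable -> const& overload
        foo(std::string("xyz"));  // temporary -> && overload
        foo(std::move(s));        // std::move casts s to an rvalue -> && overload
    }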

> This can lead to scalability issues in large clusters, as the number of connections that each node needs to maintain grows quadratically with the number of nodes in the cluster.

No, the total number of dist connections grows quadratically with the number of nodes, but the number of dist connections each node makes grows linearly.

> Not only that, in order to keep the cluster connected, each node periodically sends heartbeat messages to every other node in the cluster.

IIRC, heartbeats are once every 30 seconds by default.

> This can lead to a lot of network traffic in large clusters, which can put a strain on the network.

Let's say I'm right about 30 seconds between heartbeats, and you've got 1000 nodes. Every 30 seconds each node sends out 999 heartbeats (which almost certainly fit in a single tcp packet each, maybe less if they're piggybacking on real data exchanges). That's 999,000 packets every 30 seconds, or about 33k pps across your whole cluster. For reference, GigE line rate with full 1500 mtu packets is 80k pps. If you actually have 1000 nodes worth of work, the heartbeats are not at all a big deal.
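
A quick back-of-the-envelope check of that arithmetic (same assumptions: 1000 nodes, one heartbeat to each of the 999 peers every 30 seconds):

    #include <cstdio>

    int main() {
        const double nodes = 1000.0, interval_s = 30.0;
        const double per_interval = nodes * (nodes - 1);  // 999,000 heartbeats per interval
        const double pps = per_interval / interval_s;     // ~33,300 packets per second
        std::printf("%.0f heartbeats / %gs = %.0f pps cluster-wide\n",
                    per_interval, interval_s, pps);
    }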

> Historically, a "large" cluster in Erlang was considered to be around 50-100 nodes. This may have changed in recent years, but it's still something to be aware of when designing distributed Erlang systems.

I don't have recent numbers, but Rick Reed's presentation at Erlang Factory in 2014 shows a dist cluster with 400 nodes. I'm pretty sure I saw 1000+ node clusters too. I left WhatsApp in 2019, and any public presentations from WA are less about raw scale, because it's passé.

Really, 1000 dist connections is nothing when you're managing 500k client connections. Dist connections weren't even a big deal when we went to smaller nodes in FB.

It's good to have a solid backend network, and to try to bias towards fewer, larger nodes rather than more, smaller nodes. If you want to play with large-scale dist and spin up 1000 low-cpu, low-memory VMs, you might have some trouble. It makes sense to start with small nodes at whatever number makes you comfortable for availability, and then, when you run into limits, reach for bigger nodes until you get to the point where adding nodes is more cost effective: WA ran dual Xeon 2690 servers before the move to FB infra; Facebook had better economics with smaller single Xeon D nodes; I dunno what makes sense today, maybe a single-socket Epyc?


I was a fine dining chef for 17 years. If you were in San Francisco during the 90s, I might have cooked for you. A simple way to increase demand is to remove or change the least popular menu item every week or every month. This technique has been so successful for me I wouldn't waste much time doing anything else. Once I made a Napoleon pastry for a dessert special while at La Folie on Polk St., which sadly closed recently. I had extra pastry cream and made Paris-Brest, because we never threw out food. One waitress sold a Napoleon to every table and the other waitress sold nothing but Paris-Brest. The dishes were fundamentally the same, so I made both anytime I did one or the other as a special, because, for reasons not explained in the article, some waitstaff will sell nothing but one and others the other. I made cheese dishes to sell before the dessert, and fruit soups in the summer. This was the mid 90s, and we were tracking data. The chef pulled me aside and showed me the sales from the previous month because I sold 1.2 desserts per customer.

Nonetheless, for the last six years I cooked, I was a private chef on a mega yacht. People ask me if the guests told me what they wanted to eat. I say never, because I never asked what people wanted to eat. I cooked what I wanted to eat and then made enough for the guests and crew. It is the best menu strategy. In fine dining, the customers make decisions all day long, and in the case of being a private yacht chef, the guests are making million and billion dollar decisions. The last thing they want to do is have to decide what to eat for dinner. The family I cooked for rarely ate off the boat. And when they did, it was because I said something like, "I hear there is a very good restaurant in St. Barts named ....," which was code for "I want the night off."

I believe the reason one waitress would sell every last Paris-Brest and the other every last Napoleon was that they told the guests in the restaurant what they wanted to eat for dessert. They made the decision for the guests.


> He ruled out magnesium, which is best per unit weight in compressive buckling but is brittle and difficult to extrude.

There's a fascinating, and very new, class of nano-laminate magnesium alloys called Long Period Stacking-Ordered (LPSO) alloys. These are very lean -- the standard version is 97% Mg + 1% Zn + 2% Y -- and they have outstanding mechanical properties. At an equal weight, they're much stronger and stiffer than 6061 aluminum, and the kicker is that this is generally true only if they're extruded. If they're not extruded, the laminate-like grain structure doesn't form properly.

Could make excellent bike frames.

Magnesium corrosion would still be a problem, though. I got some LPSO-Mg samples from Fuji Light Metals, in Japan, and they were quite badly degraded within weeks.


Really cool solution! One question: maybe I missed it, but there's no technical reason the tag bits could not use the entire range of exponent bits, no? Other than the fact that having up to 2048 tags would be ridiculously branchy, I guess.

Here's a variation I just thought of, which probably has a few footguns I'm overlooking right now: use the seven highest exponent bits instead of three. Then we can directly read the most significant byte of a double, skipping the need for a rotation altogether.

After reading the most significant byte, subtract 16 (or 0b00010000) from it, then mask out the sign. To test for unboxed doubles, test if the resulting value is bigger than 15. If so, it is an unboxed double; otherwise, the lower four bits of the byte are available as alternative tags (so 16 tags).

Effectively, we adjusted the exponent range 2⁻⁷⁶⁷..2⁻⁵¹¹ into the 0b(0)0000000 - 0b(0)0001111 range and made them boxed doubles. Every other double, which is 15/16th of all possible doubles, is now unboxed. This includes subnormals, zero, all NaN encodings (so it even can exist in superposition with a NaN or NuN tagging system I guess?) and both infinities.

To be clear, this is off the top of my head so maybe I made a few crucial mistakes here.
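
For concreteness, here's how that test could be written in C++ (a sketch of the scheme exactly as described above, same caveats):

    #include <cstdint>
    #include <cstring>

    // Classify a double by its most significant byte (sign bit + top 7 exponent bits).
    bool is_unboxed_double(double d, uint8_t* tag_out) {
        uint64_t bits;
        std::memcpy(&bits, &d, sizeof bits);     // portable type pun
        uint8_t msb = uint8_t(bits >> 56);
        uint8_t t = uint8_t((msb - 16) & 0x7F);  // subtract 16, mask out the sign
        if (t > 15) return true;                 // most doubles stay unboxed
        *tag_out = t;                            // lower four bits: 16 tag values
        return false;
    }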


My claim to fame on the 20th anniversary (84 views thus far) is to have completed Ravenholm without inflicting any deaths or receiving any injury:

https://www.youtube.com/watch?v=D_XN_RwjnqM


My favorite fun fact is that, sandwiched between his revolutionary work on circuits and information theory, his actual PhD dissertation was on genetics -- something kind of unrelated to the rest of his life's work, and largely forgotten. As a current PhD candidate, I think about that a lot.

For a while it was possible to pay Elwood Edwards to record a short message (https://web.archive.org/web/20080613203307/http://www.makinw...). In 2002, I had him record "Mail classified by POPFile" for my POPFile machine learning email classifier (https://getpopfile.org).

You can listen to it here: https://soundcloud.com/john-graham-cumming/mail-classified-b...

I paid $30 for that. And him saying "Use the source, Luke!"


As a jazz aficionado, I am very familiar with Quincy Jones’ immense contributions to music. I am a very big fan of the albums he produced, such as “The Dude” and “Back on the Block.”

What is less well known is Quincy Jones’ involvement with computing. At one point he was on the advisory committee for the ACM Computers in Entertainment Magazine (https://dl.acm.org/doi/10.1145/973801.973803), and if I remember correctly, he was on the board of former Xerox PARC researcher Alan Kay’s Viewpoints Research Institute. I’ve been wanting to know more about Quincy Jones’ involvement with computing since I first learned about this a few years ago.

Rest in peace. Quincy Jones is a legendary figure.


This is known as bang-bang control, a very basic form of negative feedback.

The key insight, which may not be emphasized enough in the article, is that the vessel can only rise above 100C once all the water has changed phase (boiled off).

I think the same principle explains why beach popsicle vendors can carry many items on a hot summer day without them all melting right away. There is insulation, for sure, but in addition the temperature of what is effectively one large block of ice must rise above 0C before the popsicles can begin to change phase (melt).

In the rice cooker, this property is harnessed by a "bimetallic switch [that] measured the temperature in the external pot". Bimetallic means that one metal heats and expands faster than the other, eventually breaking the circuit.
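
A minimal sketch of that one-shot bang-bang rule (the trip temperature is an assumed value, not from the article):

    // While free water remains, boiling pins the pot near 100C and the
    // switch stays closed; once the water boils off, the temperature
    // overshoots and the bimetallic strip snaps the heater circuit open.
    struct CookerSwitch {
        bool heating = true;
        void update(double pot_temp_c) {
            if (heating && pot_temp_c > 103.0)  // trip point just above boiling
                heating = false;                // circuit broken: cooking done
        }
    };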

If memory serves, the same bimetallic trick is used in older car models' turn signal lights to produce the periodic on/off switching.

I am not Asian but enjoy my rice cooker every day. I love simple, robust engineering.


Apologies for the long response.

I am only partially qualified in that I am not a professional archeologist, but I have done post-doctoral archeological studies and have read enough archeological studies to understand the larger academic context.

It is not possible to present all the data informing a judgment in such a short work. Even in a book, it would not be possible. Thus it is common in archeology for papers to be written as part of an ongoing conversation / debate with the community - which would be defined as the small handful of other archeologists doing serious research on the same specific subject matter.

Part of that context here is that these tombs are well-established to be the royal tombs of Alexander's family, spanning a few generations including his father and his son. This is one of the most heavily studied sites in Greece for obvious reasons, and that is not something anybody is trying to prove.

In that context, his arguments are not trying to identify a body as one among millions, but as one among a small handful of under ten possibilities.

At the same time, the fact that he is not a native English speaker and general archeological style come into play. For example:

"the painter must have watched a Persian gazelle in Persia, since he painted it so naturalistically (contra Brecoulaki Citation2006). So the painter of Tomb II has to be Philoxenus of Eretria" sounds like a massive leap, and it is. He continues:

"... Tomb I (Tomb of Persephone) must have been painted hastily by Nicomachus of Thebes (Andronikos Citation1984; Borza Citation1987; Brecoulaki et al. Citation2023, 100), who was a very fast painter (Saatsoglou-Paliadeli Citation2011, 286) and was famous for painting the Rape of Persephone (Pliny, N. H. 35.108–109), perhaps that of Tomb I."

Another huge leap, both 'presented as conclusions'. However he then continues to indicate these are just hypotheses: "These hypotheses are consistent with the dates of the tombs..."

So his practice of presenting things factually does not indicate certainty in the way the words would be used in everyday speech. He seems to perhaps misunderstand the force of the terms, but also appears to be working within the context of the conversation with other archeologists I mentioned at the start: they all know every affirmation reads as "probably", rarely anything more. So it is relatively common shorthand of the craft in that sense.

I believe you are overthinking his responses to other authors, although I understand the culture shock. It is an ongoing conversation and archeologists tend to be blunt in their assessments. Add Greek bluntness on top of this, and it does not seem to matter to the material.

As to your last question: is this legitimate research? The answer overall appears to be yes, although I could see several points (such as the identification of artists I quoted above, and various items I noticed) which I would never have put into ink the way he did. Still, most of his arguments are compelling. It is a shame that the aggressiveness of a few affirmations detracts from the overall value of his work. Archeology is not code, nor is it physics. It does not pursue universal truths that are easier to verify through repeated experiments, but unique historical ones, which necessarily attempt to interweave physical details and ancient historical records. Each field has its own level of certainty, and the fact that we cannot establish these details with the same certainty as we can establish the chemical formula for water does not make them useless, or pure inventions. Far from it.


Check out this talk: https://www.youtube.com/watch?v=dF_9YcehCZo

The source code isn't hiding in a repo somewhere for security reasons — it's spread around on various pieces of paper and computers over the last 50 years. There isn't a single source of truth. Adds a whole other level of wizardry to keeping the thing running.


I worked in the factory. I remember it well even from the one photo. The factory had a Winchester Mystery House air about it; it had been added to many times over the decades. So it was more like Rome and less like Torino, with no obvious way to get from here to there. They made typewriters there well into the 80s. Olivetti had commuter buses in Ivrea and Milan well before Google buses were invented. They had wine at the company cafeteria for lunch. It was the 80s, and I would bet my boss an ice cream on NBA games whose scores I already knew from an email. Hotel La Serra was structured like a typewriter. A river runs through Ivrea, and it had a kayaking center.

I'd love to go back and take a tour.


I worked at the Library of Congress on their Digital Preservation Project, circa 2001-2003. The stated goal was to "digitize all of the Library's collections" and while most people think of books, I was in the Motion Picture Broadcast and Recorded Sound Division.

In our collection were Thomas Edison's first motion pictures, wire spool recordings from reporters at D-Day, and LPs of some of the greatest musicians of all time. And that was just our Division. Others - like American Heritage - had photos from the US Civil War and more.

Anyway, while the Rights information is one big, ugly, tangled web, the other side is the hardware to read the formats. Much of the media is fragile and/or dangerous to use, so you have to be exceptionally careful. Then you have to document all the settings you used, because imagine that three months from now you learn some filter you used was wrong or the hardware was misconfigured... you need to go back and understand what was affected and how.

Cool space. I wish I'd worked there longer.


So when he DM’d me to say that he had “a hell of a story”—promising “one-time pads! 8-bit computers! Flight attendants smuggling floppies full of random numbers into South Africa!”—I responded.

Ha ha ha. Yes, that was literally my very short pitch to Steven about Tim Jenkin's story!

The actual DM: "I think this has the makings of a hell of a story: https://blog.jgc.org/2024/09/cracking-old-zip-file-to-help-o... If you want I can connect you with Tim Jenkin. One time pads! 8-bit computers! Flights attendants smuggling floppies full of random numbers into South Africa!"

