If you run across a great HN comment, please tell us at hn@ycombinator.com so we can add it here. There are tons more out there...

The Weierstrass function is cool but the undisputed champion of calculus counterexamples has to be the Dirichlet function[1]

f(x) = 1 if x is rational, 0 otherwise.

It is defined over all real numbers but continuous nowhere. Also if you take the Dirichlet function and multiply it by x so you get

g(x) = x if x is rational, 0 otherwise

…then you have something that is continuous at exactly one place (0) and nowhere else, which also is pretty spectacular.
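
Continuity at 0 is a one-line epsilon-delta check (a quick sketch of mine, not from the linked page):

    |g(x) − g(0)| = |g(x)| ≤ |x| < ε   whenever |x| < δ := ε

At any a ≠ 0, by contrast, rationals near a give g values near a while irrationals give 0, so no single limit works and g is discontinuous there.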

[1] https://mathworld.wolfram.com/DirichletFunction.html


Super impressive, and awesome to see that you were able to use Framework Laptop hinges. Let me know if you need more. We have a ton of remaining 3.3kg ones!

My family’s phone number when I was a child was both a palindrome and a prime: 7984897.

My parents had had the number for two decades without noticing it was a palindrome. I still remember my father’s delight when he got off a phone call with a friend: “Doug just said, ‘Hey, I dialed your number backwards and it was still you who answered.’ I never noticed that before!”

A few years later, around 1973, one of the other math nerds at my high school liked to factor seven-digit phone numbers by hand just for fun. I was then taking a programming class—Fortran IV, punch cards—and one of my self-initiated projects was to write a prime factoring program. I got the program to work, and, inspired by my friend, I started factoring various phone numbers. Imagine my own delight when I learned that my home phone number was not only a palindrome but also prime.

Postscript: The reason we hadn't noticed that 7984897 was a palindrome was that, until around 1970, phone numbers in our area were written and spoken with the telephone exchange name [1]. When I was small, I learned our phone number as “SYcamore 8 4 8 9 7” or “S Y 8 4 8 9 7.” We thought of the first two digits as letters, not as numbers.

Second postscript: I lost contact with that prime-factoring friend after high school. I see now that she went on to earn a Ph.D. in mathematics, specialized in number theory, and had an Erdős number of 1. In 1985, she published a paper titled “How Often Is the Number of Divisors of n a Divisor of n?” [2]. She died two years ago, at the age of sixty-six [3].

[1] https://en.wikipedia.org/wiki/Telephone_exchange_names

[2] https://www.sciencedirect.com/science/article/pii/0022314X85...

[3] https://www.legacy.com/us/obituaries/legacyremembers/claudia...


If you were around in the '80s and '90s you might have already memorized the prime 8675309 (https://en.wikipedia.org/wiki/867-5309/Jenny). It's also a twin prime, so you can add 2 to get another prime (8675311).
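
If you want to verify these yourself, naive trial division is instant at seven digits. A throwaway sketch (checking the numbers mentioned above):

    #include <cstdio>

    // Trial division: plenty fast for 7-digit numbers.
    bool is_prime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; ++d)
            if (n % d == 0) return false;
        return true;
    }

    int main() {
        // 7984897 (the palindrome above) and the twin pair 8675309 / 8675311.
        for (long n : {7984897L, 8675309L, 8675311L})
            std::printf("%ld: %s\n", n, is_prime(n) ? "prime" : "composite");
    }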

Reminds me of the time I turned myself into a Van de Graaff generator at work.

I was a theater projectionist, back when you had 20 minute reels you had to constantly change, while babysitting two high-voltage, water-cooled, carbon arc projectors. Sometimes the film would break and you’d have to splice it. So when the theater got a print in, you had to count and log the number of splices for each reel, then the next theater would do the same and retire the print when it got too spliced up (plus, sometimes if it was the last night of a run, some lazy projectionists would splice it in place with masking tape and then you’d have to fix it). Sometimes you had to splice in new trailers or remove inappropriate ones as well.

Anyway, you counted splices by rapidly winding through the reel with a benchtop motor with a speed control, belted to a takeup reel while the source spun freely. You'd let the film slide between your fingers, counting each “bump” you felt as it wound through. I was told to ground myself by touching the metal switch plate of the speed control knob with my other hand. One night I forgot to hold it, and didn't notice until my hair started rising. I'd gone through most of the reel at a very high speed and acquired its charge.

I reached for the switch plate and shot an 8-10” arcing discharge between the plate and my fingers.

Lesson learned, I held the switch plate from then on.


> Originally, if you typed an unknown command, it would just say "this is not a git command".

Back in the 70s, Hal Finney was writing a BASIC interpreter to fit in 2K of ROM on the Mattel Intellivision system. This meant every byte was precious. To report a syntax error, he shortened the message for all errors to:

    EH?
I still laugh about that. He was quite proud of it.

I was there at the time and until the end.

That cartoon meme with the dog sitting with a cup of coffee or whatever and telling himself "This is fine", while everything is on fire, is probably the best way to describe how things felt at Nokia back then.


Daubechies wavelets are such incredibly strange and beautiful objects, particularly for how deviant they are compared to everything you are typically familiar with when you are starting your signal processing journey… if it’s possible for a mathematical construction to be punk, then it would be the Daubechies wavelets.

Co-founder of Quickwit here. Seeing our acquisition by Datadog on the HN front page feels like a truly full-circle moment.

HN has been interwoven with Quickwit's journey from the very beginning. Looking back, it's striking to see how our progress is literally chronicled in our HN front-page posts:

- Searching the web for under $1000/month [0]

- A Rust optimization story [1]

- Decentralized cluster membership in Rust [2]

- Filtering a vector with SIMD instructions (AVX-2 and AVX-512) [3]

- Efficient indexing with Quickwit Rust actor framework [4]

- A compressed indexable bitset [5]

- Show HN: Quickwit – OSS Alternative to Elasticsearch, Splunk, Datadog [6]

- Quickwit 0.8: Indexing and Search at Petabyte Scale [7]

- Tantivy – full-text search engine library inspired by Apache Lucene [8]

- Binance built a 100PB log service with Quickwit [9]

- Datadog acquires Quickwit [10]

Each of these front-page appearances was a milestone for us. We put our hearts into writing those engineering articles, hoping to contribute something valuable to our community.

I'm convinced HN played a key role in Quickwit's success by providing visibility, positive feedback, critical comments, and leads that contacted us directly after a front-page post. This community's authenticity and passion for technology are unparalleled. And we're incredibly grateful for this.

Thank you all :)

[0] https://news.ycombinator.com/item?id=27074481

[1] https://news.ycombinator.com/item?id=28955461

[2] https://news.ycombinator.com/item?id=31190586

[3] https://news.ycombinator.com/item?id=32674040

[4] https://news.ycombinator.com/item?id=35785421

[5] https://news.ycombinator.com/item?id=36519467

[6] https://news.ycombinator.com/item?id=38902042

[7] https://news.ycombinator.com/item?id=39756367

[8] https://news.ycombinator.com/item?id=40492834

[9] https://news.ycombinator.com/item?id=40935701

[10] https://news.ycombinator.com/item?id=42648043


I worked for Peter Kirstein for many years - he always had wonderful stories to tell.

In the article Peter talks about the temporary import license for the original ARPAnet equipment. The delayed VAT and duty bill for this gear prevented anyone else from taking over the UK internet in the early days, because the bill would then have become due. But he didn't mention that if the original ARPAnet equipment was ever scrapped, the bill would also become due.

When I was first at UCL in the mid 1980s until well into the 90s, all that equipment was stored disused in the men's toilets in the basement. Eventually Peter decided someone had to do something about it, but he couldn't afford the budget to ship all this gear back to the US. Peter always seemed to delight in finding loopholes, so he pulled some strings. Peter was always very well connected - UCL even ran the .int and nato.int domains for a long time. So, at some point someone from UCL drove a truck full of obsolete ARPAnet gear to some American Air Force base in East Anglia that was technically US territory. Someone from the US Air Force gave them a receipt, and the gear was officially exported. And there it was left, in the US Air Force garbage. Shame it didn't end up in a museum, but that would have required paying the VAT bill.


That March 1977 map always brings back a flood of memories to this old-timer.

Happy nights spent hacking in the Harvard graduate computer center next to the PDP-1/PDP-10 (Harv-1, Harv-10), getting calls on the IMP phone in the middle of the night from BBN network operations asking me to reboot it manually because it had gotten wedged...

And, next to me, Bill Gates writing his first assembler/linker/simulator for the Altair 8080... (I tried talking him out of this microcomputer distraction -- we have the whole world of mainframes at our fingertips! -- without success.)

(Edit:) We also would play the game of telnet-till-you-die, going from machine to machine around the world (no passwords on guest accounts in the early days), until the connection died somewhere along the way.

Plus, once the hackers came along, Geoff Steckel (systems guy on the PDP-10) wrote a little logger to record all incoming guests' keystrokes on an old teletype, so we could watch them attempting to hack the system.


Last Saturday, two other firefighters and I managed to find a woman lost in a maze of 40+ miles of trails. Her hip had dislocated; she could not move. The temperatures were in the upper 20s (F) (-3C or so) with serious windchill amidst 35mph/56kph wind gusts. It was extremely dark. We stayed with her and tried to keep her as warm as possible until a UTV arrived to extricate her. I was home by 01:00, after 4hrs outside under the stars.

In the end we didn't really do much at all, but it felt like one of the most meaningful nights of my entire life.


In 2005, my paper on breaking RSA by observing a single private-key operation from a different hyperthread sharing the same L1 cache -- literally the first publication of a cryptographic attack exploiting shared caches -- was rejected from the cryptology preprint archive on the grounds that "it was about CPU architecture, not cryptography". Rejection from journals is like rejection from VCs -- it happens all the time and often not for any good reason.

(That paper has now been cited 971 times according to Google Scholar, despite never appearing in a journal.)


The original answer to "why does FastMail use their own hardware" is that when I started the company in 1999 there weren't many options. I actually originally used a single bare metal server at Rackspace, which at that time was a small scrappy startup. IIRC it cost $70/month. There weren't really practical VPS or SaaS alternatives back then for what I needed.

Rob (the author of the linked article) joined a few months later, and when we got too big for our Rackspace server, we looked at the cost of buying something and doing colo instead. The biggest challenge was trying to convince a vendor to let me use my Australian credit card but ship the server to a US address (we decided to use NYI for colo, based in NY). It turned out that IBM were able to do that, so they got our business. Both IBM and NYI were great for handling remote hands and hardware issues, which obviously we couldn't do from Australia.

A little bit later Bron joined us, and he automated absolutely everything, so that we were able to just have NYI plug in a new machine and it would set itself up from scratch. This all just used regular Linux capabilities and simple open source tools, plus of course a whole lot of Perl.

As the fortunes of AWS et al rose and rose and rose, I kept looking at their pricing and features and kept wondering what I was missing. They seemed orders of magnitude more expensive for something that was more complex to manage and would have locked us into a specific vendor's tooling. But everyone seemed to be flocking to them.

To this day I still use bare metal servers for pretty much everything, and still love having the ability to use simple universally-applicable tools like plain Linux, Bash, Perl, Python, and SSH, to handle everything cheaply and reliably.

I've been doing some planning over the last couple of years on teaching a course on how to do all this, although I was worried that folks are too locked in to SaaS stuff -- but perhaps things are changing and there might be interest in that after all?...


Good times. I was the developer at Microsoft who designed the Xbox 360 hardware security, wrote all the boot loaders, and the hypervisor code.

Note to self: you should have added random delays before and after making the POST code visible on the external pins.


The key clarification is in one of the comments: if you want to treat partial derivatives like fractions, you need to carry the "constant with respect to foo" modifier along with both the numerator and the denominator.

Once you do that, it's clear that you can't cancel "dx at constant z" with "dx at constant y" etc. And then the remaining logic works out nicely (see thermodynamics for a perfect application of this).
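
The standard illustration (and the thermodynamics workhorse) is the triple product rule: naive fraction-cancelling predicts +1, but tracking which variable each factor holds constant gives

    (∂x/∂y)_z · (∂y/∂z)_x · (∂z/∂x)_y = −1

since no two factors hold the same variable fixed, nothing is actually allowed to cancel.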


In my province in Canada (Quebec) some IT worker in the government made a template .doc document that's widely used. They exported it with the document title "sdf fdsfdsfg". This title isn't in the actual printed document but it's in the metadata. If you Google that string in quotes you'll find tons of official documents with that title.

https://www.google.com/search?q=%22Sdf+fdsfsdfg%22


For the past year or so, I've been trying (off and on) to formalise part of my undergraduate complex analysis course in Lean. It has been instructional and rewarding, but also sometimes frustrating. I only recently was able to fully define polar form as a bijection from C* to (-pi,pi] x R, but that's because I insisted on defining the complex numbers, (power) series, exp and sin "from scratch", even though they're of course already in mathlib.

Many of my troubles probably come from the fact that I only have a BSc in maths and that I'm not very familiar with Lean/mathlib and don't have anyone guiding me (although I did ask some questions in the very helpful Zulip community). Many results in mathlib are stated in rather abstract ways and it can be hard to figure out how they relate to certain standard undergraduate theorems - or whether those are in mathlib at all. This certainly makes sense for the research maths community, but it was definitely a stumbling block for me (and probably would be one if Lean were used more in teaching - but this is something that could be sorted out given more time).

In terms of proof automation, I believe we're not there yet. There are too many things that are absolutely harder to prove than they should be (although I'm sure there are also a lot of tricks that I'm just not aware of). My biggest gripe concerns casts: in "regular" mathematics, the real numbers are a subset of the complex numbers, and so things that are true for all complex numbers are automatically true for all reals[0], but in Lean they're different types with an injective map / cast operation, and there is a lot of back-and-forth conversion that has to be done and muddies the essence of the proof, especially when you have "stacks" of casts, e.g. a natural number cast to a real cast to a complex number etc.
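
To give a toy example of the cast shuffling (assuming a current mathlib; norm_cast is the tactic that collapses these stacks):

    import Mathlib

    -- A "stack" of casts: n goes ℕ → ℝ → ℂ. The statement is
    -- mathematically trivial, but the goal is littered with
    -- coercion arrows until norm_cast normalizes them away.
    example (n : ℕ) (x : ℝ) :
        ((n : ℝ) : ℂ) + (x : ℂ) = (((n : ℝ) + x) : ℂ) := by
      norm_cast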

Of course, this is somewhat specific to the subject, I imagine that in other areas, e.g. algebra, dealing with explicit maps is much more natural.

[0] This is technically only true for sentences without existential quantifiers.


I took that class with you! It was amazing. Here are my notes: https://wstein.org/books/ribet-stein/

This reminds me of a fun experience I had in grad school. I was working on writing some fast code to compute something I can no longer explain, to help my advisor in his computational approach to the Birch and Swinnerton-Dyer conjecture. I gave a talk at a number theory seminar a few towns over, and was asked if I was doing this in hopes of reinforcing the evidence behind the conjecture. I said with a grin, "well, no, I'd much rather find a counterexample." The crowd went wild; I've never made a group of experts so angry as that day.

Well, I never was much of a number theorist. I never did come to understand the basic definitions behind the BSD conjecture. Number theory is so old, so deep, that writing a PhD on the topic is the step one takes to become a novice. Where I say that I didn't understand the definitions, I certainly knew them and understood the notation. But there's a depth of intuition that I never arrived at. So the uproar of experts, angry that I had the audacity to hope for a counterexample, left me more curious than shaken: what do they see, that they cannot yet put words to?

I am delighted by these advances in formalism. It makes the field feel infinitely more approachable, as I was a programmer long before I called myself a mathematician, and programming is still my "native tongue." To the engineers despairing at this story, take it from me: this article shows that our anxiety at the perceived lack of formalism is justified, but we must remember that anxiety is a feeling -- and the proper response to that feeling is curiosity, not avoidance.


The obituaries are often my favourite part of the Economist. They are edited by Ann Wroe - https://en.wikipedia.org/wiki/Ann_Wroe - a true legend.

They focus mostly on long-forgotten people and create an intense glimpse into the short timeframe when their life made a big impact.

My favourite obit of all time of hers is the one of Bill Millin: https://www.economist.com/obituary/2010/08/26/bill-millin (https://archive.is/iZifs)


Oh, no, now I have to go dig out some of mine....

The first really big one I wrote was the ~7000 line installer for the Entrust CA and directory, which ran on, well, all Unixes at that time. It didn't initially, of course, but it grew with customer demand.

The installation itself wasn't especially complicated, but upgrades were, a little, and this was back when every utility on every Unix had slight variations.

Much of the script was figuring out and managing those differences, much was error detection and recovery and rollback, some was a very primitive form of package and dependency management....

DEC's Unix (the other one, not Ultrix) was the most baffling. It took me days to realize that all command line utilities truncated their output at column width. Every single one. Over 30 years later and that one still stands out.

Every release of HP-UX had breaking changes, and we covered 6.5 to 11, IIRC. I barely remember Ultrix or the Novell one or Next, or Sequent. I do remember AIX as being weird but I don't remember why. And of course even Sun's three/four OS's had their differences (SunOS pre 4.1.3; 4.1.3; Solaris pre 2; and 2+) but they had great FMs. The best.


Most explanations of C++'s std::move fail because they don't focus on its actual effect: controlling function overloading.

Most developers have no trouble getting the idea of C++'s function overloading for parameter types that are totally different, e.g. it's clear what foo("xyz") will call if you have:

   void foo(int x);
   void foo(std::string x);
It's also not too hard to get the idea with const and mutable references:

   void foo(std::string& x);
   void foo(const std::string& x);
Rvalue references allow another possibility:

   void foo(std::string&& x);
   void foo(const std::string& x);
(Technically it's also possible to overload with rvalue and non-const regular references, or even all three, but this is rarely done in practice).

In this pairing, the first option would be chosen for a temporary object (e.g. foo(std::string("xyz")) or just foo("xyz")), while the second would be chosen if passing in a named variable (std::string x; foo(x)). In practice, the reason you bother to do this is so that the first overload can pilfer memory resources from its argument (whereas, presumably, the second will need to do a copy).

The point of std::move() is to choose the first overload. This has the consequence that its argument will probably end up being modified (by foo()) even though std::move() itself does not contain any substantial code.

All of the above applies to constructors, since they are functions and they can also be overloaded. Therefore, the following single function behaves very similarly to the overload pair above in most practical situations, since std::string has overloaded copy and move constructors:

   void foo(std::string x);
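
To make the overload selection concrete, here is a minimal self-contained sketch (the printed labels are mine):

    #include <iostream>
    #include <string>
    #include <utility>

    void foo(std::string&&)      { std::cout << "rvalue overload\n"; }
    void foo(const std::string&) { std::cout << "lvalue overload\n"; }

    int main() {
        std::string s = "xyz";
        foo(s);                  // lvalue overload: s is a named variable
        foo(std::string("xyz")); // rvalue overload: temporary object
        foo(std::move(s));       // rvalue overload: std::move is just a cast
        // s is now in a valid but unspecified state.
    }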

> This can lead to scalability issues in large clusters, as the number of connections that each node needs to maintain grows quadratically with the number of nodes in the cluster.

No, the total number of dist connections grows quadratically with the number of nodes, but the number of dist connections each node makes grows linearly.

> Not only that, in order to keep the cluster connected, each node periodically sends heartbeat messages to every other node in the cluster.

IIRC, heartbeats are sent once every 30 seconds by default.

> This can lead to a lot of network traffic in large clusters, which can put a strain on the network.

Let's say I'm right about 30 seconds between heartbeats, and you've got 1000 nodes. Every 30 seconds each node sends out 999 heartbeats (which almost certainly fit in a single tcp packet each, maybe less if they're piggybacking on real data exchanges). That's 999,000 packets every 30 seconds, or about 33k pps across your whole cluster. For reference, GigE line rate with full 1500 mtu packets is 80k pps. If you actually have 1000 nodes worth of work, the heartbeats are not at all a big deal.
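
Spelled out (my arithmetic, assuming that 30-second default):

    1000 nodes × 999 peers = 999,000 heartbeats per 30-second window
    999,000 / 30 ≈ 33,300 packets/s cluster-wide
    999 / 30 ≈ 33 packets/s sent (and received) per node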

> Historically, a "large" cluster in Erlang was considered to be around 50-100 nodes. This may have changed in recent years, but it's still something to be aware of when designing distributed Erlang systems.

I don't have recent numbers, but Rick Reed's presentation at Erlang Factory in 2014 shows a dist cluster with 400 nodes. I'm pretty sure I saw 1000+ node clusters too. I left WhatsApp in 2019, and any public presentations from WA are less about raw scale, because it's passé.

Really, 1000 dist connections is nothing when you're managing 500k client connections. Dist connections weren't even a big deal when we went to smaller nodes in FB.

It's good to have a solid backend network, and to bias towards fewer, larger nodes rather than more, smaller nodes. If you want to play with large-scale dist and spin up 1000 low-CPU, low-memory VMs, you might have some trouble. It makes sense to start with small nodes and whatever number makes you comfortable for availability, and then, when you run into limits, reach for bigger nodes until you get to the point where adding nodes is more cost effective: WA ran dual Xeon 2690 servers before the move to FB infra; Facebook had better economics with smaller single Xeon D nodes; I dunno what makes sense today, maybe a single-socket Epyc?


I was a fine dining chef for 17 years. If you were in San Francisco during the 90s, I might have cooked for you. A simple way to increase demand is to remove or change the least popular menu item every week or every month. This technique has been so successful for me I wouldn't waste much time doing anything else. Once I made a Napoleon pastry for a dessert special while at La Folie on Polk St., which sadly closed recently. I had extra pastry cream and made Paris-Brest because we never threw out food. One waitress sold Napoleon at every table and the other waitress sold nothing but Paris-Brest. The dishes were fundamentally the same, so I made both anytime I did one or the other as a special, because waitstaff, for reasons not explained in the article, will each sell nothing but one dish or the other. I made cheese dishes to sell before the dessert, and fruit soups in the summer. This was the mid 90s and we were tracking data. The chef pulled me aside and showed me the sales from the previous month because I sold 1.2 desserts per customer.

Nonetheless, for the last six years I cooked, I was a private chef on a mega yacht. People ask me if the guests told me what they wanted to eat. I say never, because I never asked what people wanted to eat. I cooked what I wanted to eat and then made enough for the guests and crew. It is the best menu strategy. In fine dining, the customers make decisions all day long, and in the case of being a private yacht chef, the guests are making million and billion dollar decisions. The last thing they want to do is have to decide what to eat for dinner. The family I cooked for rarely ate off the boat. And when they did, it was because I said something like, "I hear there is a very good restaurant in St. Barts named ....," which was code for "I want the night off."

I believe the reason one waitress would sell every last Paris-Brest and the other every last Napoleon was that they told the guests in the restaurant what they wanted to eat for dessert. They made the decision for the guests.


> He ruled out magnesium, which is best per unit weight in compressive buckling but is brittle and difficult to extrude.

There's a fascinating, and very new, class of nano-laminate magnesium alloys called Long Period Stacking-Ordered (LPSO) alloys. These are very lean -- the standard version is 97% Mg + 1% Zn + 2% Y -- and they have outstanding mechanical properties. At an equal weight, they're much stronger and stiffer than 6061 aluminum, and the kicker is that this is generally true only if they're extruded. If they're not extruded, the laminate-like grain structure doesn't form properly.

Could make excellent bike frames.

Magnesium corrosion would still be a problem, though. I got some LPSO-Mg samples from Fuji Light Metals, in Japan, and they were quite badly degraded within weeks.


Really cool solution! One question: maybe I missed it, but there's no technical reason the tag bits could not use the entire range of exponent bits, no? Other than the fact that having up to 2048 tags would be ridiculously branchy, I guess.

Here's a variation I just thought of, which probably has a few footguns I'm overlooking right now: use the seven highest exponent bits instead of three. Then we can directly read the most significant byte of a double, skipping the need for a rotation altogether.

After reading the most significant byte, subtract 16 (or 0b00010000) from it, then mask out the sign. To test for unboxed doubles, test if the resulting value is bigger than 15. If so, it is an unboxed double, otherwise the lower four bits of the byte are available as alternative tags (so 16 tags).

Effectively, we adjusted the exponent range 2⁻⁷⁶⁷..2⁻⁵¹¹ into the 0b(0)0000000 - 0b(0)0001111 range and made them boxed doubles. Every other double, which is 15/16th of all possible doubles, is now unboxed. This includes subnormals, zero, all NaN encodings (so it even can exist in superposition with a NaN or NuN tagging system I guess?) and both infinities.

To be clear, this is off the top of my head so maybe I made a few crucial mistakes here.
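
In C++, the byte test I have in mind would look something like this (same caveat: is_unboxed is a hypothetical helper, untested):

    #include <bit>
    #include <cstdint>

    // Top byte of an IEEE-754 double = sign bit + seven high exponent bits.
    bool is_unboxed(double d) {
        std::uint64_t bits = std::bit_cast<std::uint64_t>(d);
        std::uint8_t msb = bits >> 56;                        // most significant byte
        std::uint8_t t = static_cast<std::uint8_t>(msb - 16)  // subtract 16...
                         & 0x7F;                              // ...then mask out the sign
        return t > 15;  // <= 15 means boxed: low 4 bits of t are one of the 16 tags
    }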


My claim to fame on the 20th anniversary (84 views thus far) is to have completed Ravenholm without inflicting any deaths or receiving any injury:

https://www.youtube.com/watch?v=D_XN_RwjnqM


My favorite fun fact is that, sandwiched between his revolutionary work on circuits and information theory, his actual PhD dissertation was on genetics; like something kind of unrelated to the rest of his life's work and largely forgotten. As a current PhD candidate, I think about that a lot.

For a while it was possible to pay Elwood Edwards to record a short message (https://web.archive.org/web/20080613203307/http://www.makinw...). In 2002, I had him record "Mail classified by POPFile" for my POPFile machine learning email classifier (https://getpopfile.org).

You can listen to it here: https://soundcloud.com/john-graham-cumming/mail-classified-b...

I paid $30 for that. And him saying "Use the source, Luke!"

