If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

To make Less (the pager program) behave a bit more as if it used Readline (it doesn't), one can create a `~/.config/lesskey` file with these contents:

  #line-edit
  ^A home
  ^E end
  ^F right
  ^B left
  ^P up
  ^N down
  ^D delete
  ^W word-backspace
  \ed word-delete
  \ef word-right
I think a relatively recent version of Less is required.

I was on the pre-IPO (and post-IPO) design team. So it's partly my fault lmfao.

> You can choose a soothing music play list in the car and it automatically resumes in the next ride

Oh wow! There is a non-zero chance that was implemented because of some feedback I provided as a trusted tester many months ago. I napped my son in them a lot when they were free and just spent my time thinking up things they should do and reporting them in-app.


The best analogy of zk-proofs I've heard is to suppose you have found Waldo in "Where's Waldo," and want to prove that you have done this without revealing the location.

You could take a piece of paper (much larger than the picture/book), cut out a Waldo-shaped hole in it, and position the paper such that he is shown through the hole. Then, when you show it to the challenger, they know that you have found him without you revealing where he is.
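The paper-with-a-hole plays roughly the role a cryptographic commitment plays in real protocols. As a loose code analogue (a commitment is only one ingredient of a zk-proof, and the coordinates here are made up), a hash commitment hides the location while binding you to it:

```python
import hashlib
import secrets

def commit(location, nonce):
    """Hash commitment: reveals nothing about location, but can't be changed later."""
    return hashlib.sha256(f"{location}:{nonce.hex()}".encode()).hexdigest()

location = (312, 88)             # hypothetical Waldo coordinates
nonce = secrets.token_bytes(16)  # random blinding so the hash can't be brute-forced
c = commit(location, nonce)      # publish c without revealing the location

# Later, opening the commitment proves you knew the location all along:
assert commit(location, nonce) == c
```

A real zk-proof goes further: it convinces the verifier right now, without ever opening the commitment.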


So sad! Rest in peace.

I got to meet him in his office and have dinner with him once. It was an unforgettable and hugely influential experience.

Two fun anecdotes that have never left me:

- He taught me that IPIs (inter-processor interrupts) are inherently and hugely expensive. Knowing this has helped me with architectural choices more times than I can count.

- He quoted (I think from someone else) a rebuttal to the idea that Physics is the reality and math is just theory. It goes something like: Math is the reality that physicists sometimes discover. Love it.


Hi! I wrote this book. Ask me anything. I was also a designer on Spore. I'm also trying to feed my 8-month-old lunch and he is very excited to answer anything too.

I've actually isolated and sequenced the subject fungus (Parengyodontium album) from terrestrial sources. If you'd like to check out the photos (and DNA), check out:

https://www.inaturalist.org/observations/147456216

https://www.inaturalist.org/observations/150149352


I was one of the developers responsible for implementing the netcode on Serious Sam. We often slept under the desks in the offices at Croteam after lurking Usenet. One post in particular described the QuakeWorld prediction system, which inspired us. That night we coded a simplistic MVP while a colleague (hi Dan) tested it over an old 486 *nix machine acting as a router that we could simulate lag with. This was well before the actual game was built around it.

Reminds me. I saw a bizarre fungus growing on an old airport audio speaker left in storage. The speaker was a 5-foot-high metallic tower, and this 4-inch-high alien-looking thing was growing on it. It had attached itself via a beautiful root-like system of tentacles to the metal surface. Someone tried to kick it off, and it was so tightly fused to the metal that the stem broke but the root system stayed fused. So they then scraped it off with a chisel, and there was a hole in the metal underneath. It looked like it was eating the metal. These were ancient speakers, so that metallic frame was quite thick and heavy, and it was really freaky to see how it had been sort of dissolved away. That storage unit hadn't been opened in 2 months, so the growth couldn't have been very old either. Left us all wondering what things would have looked like if no one had bothered to open the unit.

In primary care, I used to smell sinus infections/strep as a patient walked into the room for their “sick visit” and felt confident enough to diagnose without swabbing, but I still swabbed anyway to avoid antibiotics for viral infections. I've long since left primary care for hospital and concierge medicine, so now C. diff stool and melena usually get me.

There was this one time in residency I had a tiny older lady from a very rural town come in with a festering breast wound. One breast was normal sized and the other was 4x the size of the normal one, almost the size of a medium watermelon. Turns out it was necrotizing breast cancer and severe cellulitis. She didn't recall much about why it took her this long to seek help. Her kids had pleaded with her to get it checked out, but she had a deep mistrust of healthcare workers; it wasn't until the smell was unbearable for her family that they brought her in. The whole ED smelled of rotten infected flesh, and no amount of Winterfresh and peppermint oil was able to help the smell. That was about 10 years ago and I still remember that day and smell.


Wasn't expecting my question to hit top of HN. I guess I'll give some context for why I asked it.

I work in quantum error correction, and was trying to collect interesting and quantitative examples of repetition codes being used implicitly in classical systems. Stuff like DRAM storing a 0 or 1 via the presence or absence of 40K electrons [1], undersea cables sending X photons per bit (don't know that one yet), some kind of number for a transistor switching (haven't even decided on the number for that one yet), etc.

A key reason quantum computing is so hard is that by default repetition makes things worse instead of better, because every repetition is another chance for an unintended measurement. So protecting a qubit tends to require special physical properties, like the energy gap of a superconductor, or complex error correction strategies like surface codes. A surface code can easily use 1000 physical qubits to store 1 logical qubit [2], and I wanted to contrast that with the sizes of implicit repetition codes used in classical computing.

1: https://web.mit.edu/rec/www/dramfaq/DRAMFAQ.html

2: https://arxiv.org/abs/1208.0928
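For contrast, the classical side is easy to demonstrate. A toy repetition code with majority-vote decoding (my own sketch, not from the comment above) drives the logical error rate far below the per-copy error rate:

```python
import random

def encode(bit, n):
    """Repetition code: store one logical bit as n physical copies."""
    return [bit] * n

def add_noise(copies, p):
    """Flip each physical copy independently with probability p."""
    return [b ^ (random.random() < p) for b in copies]

def decode(copies):
    """Majority vote recovers the logical bit."""
    return int(sum(copies) * 2 > len(copies))

random.seed(0)
p, n, trials = 0.1, 41, 10_000   # 10% per-copy error rate, 41 repetitions
failures = sum(decode(add_noise(encode(1, n), p)) != 1
               for _ in range(trials))
print(failures / trials)  # logical error rate: vastly below p
```

Classically, more copies always help; the quantum difficulty described above is precisely that naive copying gives every repetition another chance to be measured.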


After my little sister had her first child and realized how expensive baby stuff is, she started a lucrative side-hustle and ran with it for years. Basically, she bought baby stuff from a warehouse that got their inventory from returns at large retailers like Target and Walmart. She focused almost entirely on baby strollers, but also backyard swing sets for kids, and got it all for pennies on the dollar. She became friendly with the customer service reps at the stroller manufacturers and could usually get replacement parts for free (it's a warranty replacement if the service rep says it is). She knew all the stroller model numbers and their associated part numbers. She got really good at repairing the strollers in her garage, and then flipping them on Craigslist. Her garage looked like a baby stroller showroom. She made decent money doing it, but the best part is her "customers" (other new mothers, most of them poor) were always so happy and appreciative because of the deal they were getting. Everyone was happy.

The real secret sauce to her side-hustle was the relationship she had with the lady who managed the warehouse where she bought the baby stuff. The warehouses usually have auctions on large lots or pallets of stuff; you bid on whatever's on the pallet, you've got no choice. The lady used to let my sister come to the warehouse periodically (usually just before a big auction) and cherrypick what she wanted, which was always the baby strollers and swing sets. The side-hustle wouldn't have worked without that. (My sister (and her husband) used to flip houses, too, and I think she sold the warehouse lady a house.)


Hi HN, the main (more detailed) article is here: https://github.com/karpathy/llm.c/discussions/481

Happy to answer questions!


My uncle worked at Estes, the hobby rocket company, for many years. He was always a stickler about calling the black powder propellant sources "motors" and indeed older motors are labeled such. [1] He insisted they were not engines as they had no moving parts and would always correct me when I said "rocket engine." He eventually explained that rocket engines exist, but they are engines with valves and pumps and use liquid fuel (e.g. the Saturn V's F-1 engines), while solid rockets (e.g. Estes' products, or the shuttle's SRBs) are simply motors since they merely consist of burning propellant and a nozzle. Indeed the wiki pages for the F-1 and the SRB are consistent in calling the former engine and the latter motor.

However, at some point since he retired, Estes transitioned to calling them Engine/Motors [2], and now, the primary labelling Estes uses calls them Engines, though Engine/Motor is still printed on the cardboard casing itself. [3]

Interestingly, the Spanish, French and German on the motors still use motor, as Motor, Moteur-Fusee and Raketenmotor, respectively.

Because of that upbringing, I have since treated the words to mean that a motor is something that provides force of motion (thrust or rotation) - it may or may not also be an engine, as in the rocketry examples. An engine is a contraption with moving/interacting parts that uses energy to accomplish some goal - that goal may (F-1, car engine) or may not (cotton gin, search engine) be the propulsion of the contraption itself and what it's attached to.

That said, as a child I made no such distinction, hence the frequent corrections. I am happy to recognize that in common vernacular they are usually synonymous, though it would still sound strange, I think, to call something a 'search motor' (edit: however, see comment by yau8edq12i !) or a 'graphics motor' just as it would be jarring to encounter 'servoengine'.

1: https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi...

2: https://www.apogeerockets.com/bmz_cache/f/f8ecac9604d017a5c7...

3: https://estesrockets.com/products/b6-4-engines


These scripts were written by my father in the 1980s; he was a middle school science teacher and wrote them to help him manage classrooms. I think it's a cool example of the way personal computing changed life in that era: he was never a software engineer, just a guy who needed to automate some tedious work. I encoded the BAS files in UTF-8 so they are easier to read.

Former journalist here. I spent lots of time in college learning about libel law, and then applying it in my professional life as an editor.

One thing about libel that many people don't understand is that retraction and editing of the content isn't a defense. So where it says "note the libel-friendly phrasing" and "now edited to avoid any possible threats of libel" and "[editor’s note: removed a possibly incorrect claim]" he could still be found guilty of libel if previously published assertions contained non "libel-friendly" phrasing. As long as a defamatory assertion was published at some point, you can still be found guilty of libel.

It probably goes without saying, but it is also not a defense to libel to say that you asserted something to be true merely because there was no evidence to the contrary. Absent a contractual or legal obligation, Lumina had no duty to engage with him and answer his questions. So if Lumina can provide evidence that Trevor asserted things that are demonstrably false, and they damaged Lumina's business, then Trevor can't argue as a defense that he merely had no way of knowing that they were false.

Finally, Trevor seems to be saying in his update that he was merely asking questions -- but it's possible for a court to find that merely phrasing false, defamatory assertions in the form of a question is not an absolute protection against a libel claim.


When I was a kid I had lots of warts on the back of my hand. My mother took me to her cousin, a respected dermatologist. He said that Plan "A" was the "witch doctor cure": I had to steal a banana (buying it would not work). I then had to eat the banana without anyone seeing me, rub the inside of the banana peel on the warts and then bury the peel in the garden, so that it would never be found. Plan "B" was cryotherapy. I tried Plan "A", and it worked! The only concern is that now, almost 70 years later, I am telling the story and the warts could come back :-)

It's not any kind of paradox. Structural unemployment happens when the skills of the work force don't match the needs of employers, so there is both unemployment and difficulty hiring.

Structural unemployment is usually high when there's a rapid change in demand for skills, as of course there is in tech. It results in crazy high salaries too. People with machine learning experience are getting 7-figure offers, while people with jQuery experience can't find jobs.

As an individual, you can both improve the economy AND make fat stacks by learning the skills that are in high demand. As an employer, you can do better by finding skill sets that aren't in high demand, with enough overlap with what you need that you can retrain. There are a lot of unemployed video game programmers right now, so if you can figure out how to use people with those skills you can hire some smart, energetic people at moderate salaries.


In the aughts I worked at Adobe and spent time trying to archive the source code for Photoshop, Illustrator, PostScript, and other apps. Thomas Knoll's original Mac floppy disk backups were available, so I brought in my Mac Plus, with a serial cable to transfer the files to a laptop via Kermit. The first version was 0.54, dated 6 July 1988. The files on the floppies were in various ancient compressed archive formats, but most were readable. I created an archive on a special Perforce server of all the code that I found. Sadly, the earliest Illustrator backups were on a single external disk drive that had gone bad.

I love this paper. For reasons too winding to spell out here, my dad gave me this paper to read when I was in middle school. I was pretty good at math for my age, but not so good that I understood all the math in this paper. (The bit on page 4, "And we have many algorithms from linear algebra that can help us find this dependency" was particularly frustrating as linear algebra was many, many years away for me.) Nevertheless, I was able to build the basics of a factoring program and factor some decently large numbers.

What a testament to the clarity, accessibility, and quality of the writing! This clearly isn't a conference paper or anything, but still very technical.

It wasn't until a decade later when I was doing my undergrad that I learned who Carl Pomerance was, and also who Paul Erdős was, to whom this paper is dedicated. Blew my mind.


(I work at OpenAI.)

It's really how it works.


I paused at the name because there was a great scientist and engineer named Frank Albini in the field of wildfire science. Lo and behold, Steve Albini was the son of said Frank Albini. I hadn't heard of Steve Albini before, and I'm not familiar with his music, but he clearly had an enormous impact. I just thought it was interesting that both father and son could leave behind such a large legacy in their respective endeavors.

Former hospital system executive, wrote hacking healthcare for O'Reilly, created the open source ClearHealth/HealthCloud EMR...

One of the few times in managing hospital systems that I was actually shocked at unethical behavior was when we took over management of a system that, through accidents of history, included a series of dentistry clinics. Dentists do not have any equivalent to the Hippocratic oath; they have no professional obligation to be honest with their "patients". The overriding operational theme of the clinics was how to defraud "patients" with completely unnecessary work to maximize profits and borderline defraud dental insurance. I understand that people have a dim view of ethics in American healthcare, but this was what I would consider criminal behavior in a medical setting and, as further experiences taught me, is the norm in American dentistry. Suffice it to say that we divested from the dentistry clinics asap.

Here is one Swiss study showing that 30% of dentists committed fraud in the studied visits, or were so incompetent that their behavior constituted fraud. I would guess that American dentistry is closer to 50%.

Have you had your wisdom teeth out? You are very likely to be the victim of dental fraud.

https://pubmed.ncbi.nlm.nih.gov/3135372/

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3036573


I once bought a far larger supercomputer. It was 1/8 (roughly) of ASCI Blue Mountain. 72 racks. Commissioned in 1998 as #1 or #2 on the TOP500, officially decommissioned in 2004, purchased my 1/8 for $7k in ~2005.

Moving 72 racks was NOT easy. After paying substantial storage fees, I rented a 1500 sq ft warehouse after selling off a few of them, and they filled it up. Took a while to get 220V/30A service in there to run just one of them for testing purposes. Installing IRIX was 10x worse than any other OS: imagine 8 CDs, each of which you had to insert twice during the process. Luckily somebody listed a set on eBay. SGI was either already defunct or just very unfriendly to second-hand owners like myself.

The racks ran SGI Origin 2000s with CrayLink interconnects. Sold 'em off 1-8 at a time, mainly to render farms. Toy Story had been made on similar hardware. The original NFL broadcasts with that magic yellow first-down line were synthesized with similar hardware. One customer did the opening credits for a movie with one of my units.

I remember still having half of them around when Bitcoin first came out. It never occurred to me to try to mine with them, though I suspect if I'd been able to provide sufficient electrical service for the remainder, Satoshi and I would've been neck-and-neck for number of bitcoins in our respective wallets.

The whole exercise was probably worthwhile. I learned a lot, even if it does feel like seven lifetimes ago.


Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine-learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left complexity exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that had been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it or not.

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch go check it out.
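As a hypothetical illustration of how such a bug stays invisible (this is made up, not Google's code): an off-by-one in pairing per-position boosts with results crashes nothing and produces plausible-looking rankings, just the wrong ones:

```python
def rerank(results, boosts):
    """Intended: add boosts[i] to results[i]'s score, then sort by score."""
    out = []
    for i, (doc, score) in enumerate(results):
        # Bug: boosts[i + 1] instead of boosts[i]. Nothing crashes and no
        # metric flags it; each result silently receives its neighbor's boost.
        boost = boosts[i + 1] if i + 1 < len(boosts) else 0.0
        out.append((doc, score + boost))
    return sorted(out, key=lambda pair: pair[1], reverse=True)

results = [("a", 1.0), ("b", 0.9)]
boosts = [0.0, 0.2]             # intended winner: "b" with 1.1
print(rerank(results, boosts))  # "a" takes "b"'s boost and wins instead
```

The output is a perfectly valid-looking ranking, which is exactly why this class of bug can sit in a scoring formula for years.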


I would say that there is very little danger of a proof in Lean being incorrect.

There is a serious danger, which has nothing to do with bugs in Lean, which is a known problem for software verification and also applies in math: one must read the conclusions carefully to make sure that the right thing is actually proved.

I read Wilshaw's final conclusions carefully, and she did indeed prove what needed to be proved.


At Software Arts I wrote or worked on the IL interpreter for the TRS 80 Model III, the DEC Rainbow, the Vector Graphic, the beginnings of the Apple Lisa port, as well as the IBM PC port. To put you into the state of mind at the time,

- in the pre-PC era, the microcomputer ecosystem was extremely fragmented in terms of architectures, CPUs, and OSes: 6502, Z80, 68K, Z8000, 8088; DOS, CP/M, CP/M-86, etc. Our publisher (Personal Software) wanted as much breadth of coverage as you might imagine

- one strong positive benefit of porting from 6502 assembly to IL and using an interpreter was that it enabled the core code to remain the same while leaving the complex work of paging and/or memory mapping to the interpreter, enabling access to 'extended memory' without touching or needing to re-test the core VisiCalc code. Same goes for display architectures, printer support, file system I/O, etc.

- another strong benefit was the fact that, as the author alludes to, the company was trying to transition to being more than a one-hit wonder by creating a symbolic equation solver app - TK!Solver - that shared the interpreter.

Of course, the unavoidable result is that the interpreter - without modern affordances such as JIT compilation - was far less snappy than native code. We optimized the hell out of it and it wasn't unusable, but it did feel laggy.
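The pattern described above, a portable IL core with platform specifics pushed into the interpreter, can be sketched in a few lines. This is just a toy dispatch loop, not VisiCalc's actual IL:

```python
# Toy dispatch-loop interpreter: the IL program is fully portable, while
# platform-specific concerns (here, just output) live behind a hook that
# each port supplies.
def run(program, io_write):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "print":
            io_write(str(stack.pop()))  # the port-specific part
        else:
            raise ValueError(f"unknown op: {op}")

# The same IL runs unchanged on every "platform"; only io_write differs.
il = [("push", 2), ("push", 3), ("add",), ("print",)]
run(il, io_write=print)
```

Each port reimplements only the hooks (display, printing, file I/O), while the spreadsheet core ships as identical IL everywhere, at the cost of the dispatch overhead noted above.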

Fast forward to when I left SoftArts and went across the street to work for my friend Jon Sachs who had just co-founded Lotus with Mitch Kapor. Mitch & Jon bet 100% that the PC would reset the ecosystem, and that the diversity of microcomputers would vanish.

Jon single-handedly wrote 1-2-3 in hand-tuned assembly language. Yes, 1-2-3 was all about creating a killer app out of 1.spreadsheet+2.graphics+3.database. That was all Mitch. But, equally, a killer aspect of 1-2-3 was SPEED. It was mind-blowing. And this was all Jon. Jon's philosophy was that there is no 'killer feature' that was more important than speed.

When things are moving fast and the industry is taking shape, you make the best decisions you can given hunches about the opportunities you spot, and the lay of the technical and market landscape at that moment. You need to make many key technical and business decisions in almost an instant, and in many ways that determines your fate.

Even in retrospect, I think the IL port was the right decision by Dan & Bob given the microcomputing ecosystem at the time. But obviously Mitch & Jon also made the right decision for their own time - just a matter of months later. All of them changed the world.


Back in 1999-2000 there was an "International RoShamBo Programming Competition" [1] where computer bots competed in the game of rock-paper-scissors. The baseline bot participant just selected its play randomly, which is a theoretically unbeatable strategy. One joke entry to the competition was carefully designed to beat the random baseline ... by reversing the state of the random number generator and then predicting with 100% accuracy what the random player would play.

Edit: the random-reversing bot was "Nostradamus" by Tim Dierks, which was declared the winner of the "supermodified" class of programs in the First International RoShamBo Programming Competition. [2]

[1] https://web.archive.org/web/20180719050311/http://webdocs.cs...

[2] https://groups.google.com/g/comp.ai.games/c/qvJqOLOg-oc
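The trick can be sketched in a few lines, assuming for illustration that the seed is simply known outright rather than reversed from observed outputs as Nostradamus did:

```python
import random

moves = ("rock", "paper", "scissors")
beats = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# The "unbeatable" random player: deterministic once its PRNG state is known.
opponent = random.Random(12345)

# The attacker runs an identical generator in lockstep...
oracle = random.Random(12345)

wins = 0
for _ in range(1000):
    predicted = oracle.choice(moves)  # foresee the opponent's "random" move
    my_move = beats[predicted]        # ...and play its counter
    actual = opponent.choice(moves)
    wins += (my_move == beats[actual])
print(wins)  # 1000: every single round won
```

Random play is only unbeatable if it is actually unpredictable; a pseudorandom generator with recoverable state is neither.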


We (Nepalis) have been using this material to make lokta paper for a long time now. These papers (Nepali kaagaz) are mainly used today for official documents.

Funny story. Before pivoting my startup to Loom, we were a user testing company named Opentest. Instead of spinning up a DB and creating a dashboard for my co-founders to look at who requested certain user tests, I just dumped everything into a Google Sheet. It was so good. No downtime. Open access. Only 3 people looking/editing, so no conflict. Didn't have to deal with database upgrades or maintenance. I often think about this decision and feel like I've learned a bunch of "good engineering practices" that pale in comparison to how being truly scrappy can be a genius unlock at any level.


