maginx's comments

I'm curious about the 10x from the reimplementation in Go - couldn't it have been achieved otherwise? Finding the hotspots, reimplementing them with better algorithms, moving a few critical paths to native code if necessary, etc. Or even improving the JIT itself, which might benefit all programs. Just wondering, because I wouldn't have thought the JIT overhead was so large that you could gain 10x just by reimplementing in Go (or C, assembly, etc.)... that is something I would only have expected when going from an interpreted context.


Hejlsberg has explained this in some interviews: roughly a 3x speedup from going native, and another 3-4x from being able to actually do effective multithreading.
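
For intuition on the multithreading part, here is a minimal Go sketch - hypothetical, not the actual compiler code - of the cheap fan-out a native Go port enables: run each per-file pass in its own goroutine and wait for all of them.

  package main

  import (
    "fmt"
    "sync"
  )

  // checkFile stands in for an expensive per-file pass (parsing,
  // binding, type checking). Purely a placeholder for illustration.
  func checkFile(name string) string {
    return "checked " + name
  }

  func main() {
    files := []string{"a.ts", "b.ts", "c.ts", "d.ts"}
    results := make([]string, len(files))

    var wg sync.WaitGroup
    for i, f := range files {
      wg.Add(1)
      go func(i int, f string) { // one goroutine per file
        defer wg.Done()
        results[i] = checkFile(f)
      }(i, f)
    }
    wg.Wait() // every file is processed before results are read

    for _, r := range results {
      fmt.Println(r)
    }
  }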


That's not my experience - I've found a handful of accepted and verified bugs in major commercial compilers, and all were in the codegen/backend, even though the code to be generated was quite simple. In one case it was basically an array copy in Java bytecode that got erroneously translated into what was effectively a "copy until zero termination" operation.
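
To make that bug class concrete, here is a hedged Go sketch (reconstructed from memory, not the actual miscompiled code) of the difference between what the source asked for and what the backend effectively emitted:

  package main

  import "fmt"

  // correctCopy copies all elements, which is what the source asked for.
  func correctCopy(dst, src []byte) {
    copy(dst, src)
  }

  // buggyCopy models the miscompiled behavior: it stops at the first
  // zero byte, as if the data were a zero-terminated string.
  func buggyCopy(dst, src []byte) {
    for i, b := range src {
      if b == 0 {
        return // silently drops everything after the first zero
      }
      dst[i] = b
    }
  }

  func main() {
    src := []byte{1, 2, 0, 3, 4}
    a := make([]byte, len(src))
    b := make([]byte, len(src))
    correctCopy(a, src)
    buggyCopy(b, src)
    fmt.Println(a) // [1 2 0 3 4]
    fmt.Println(b) // [1 2 0 0 0]
  }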


Around 10 years ago I found a JIT bug in a JDK from a big Java vendor. A new version of a web application server had been deployed. The production application crashed after around 30 minutes of running, almost simultaneously on both production sites. It was an internal checksum calculation in the application that failed - an obscure error never seen before. The upgrade was rolled back immediately. I was assigned to the case and of course didn't suspect a JIT error. But within a week of investigation I started suspecting it had to be one (though I didn't dare tell anyone!), and I eventually managed to demonstrate it and reproduce it consistently. The vendor confirmed the bug and made a temporary workaround available via switches that disabled some new optimizations. Later a real fix was shipped.

I've also found 3-4 JavaScript JIT compiler bugs in major browsers, all confirmed. I was a developer on what was, for its time, a quite complicated JavaScript solution, so we tended to encounter obscure JavaScript errors before others did.


> Around 10 years ago I found a JIT bug in a JDK from a big Java vendor.

Was it J9? Even the remote possibility of having been a cubicle partition away from the unrolling of your story, or that I might have heard about it at lunch, or even contributed in some small way... it's strangely affirming.

If it was J9, I'm curious if you remember much about it. The options the service team would have given you may well be still around: https://github.com/eclipse-omr/omr/blob/master/compiler/cont...


I agree - I don't know what field it formally is, but computer science it is not. It is also related to information retrieval (aka "Google skills"), problem presentation, 'theory of mind', even management and psychology. I mention the latter because people often ridicule AI responses for being bad answers that sound 'too AI'. But often it is simply because not enough context-specific information was given to allow the AI to give a more personalized response. One should compare the response to "If I had asked a random person on the internet this query, what might I have gotten?" If you write "The response should be written as a <insert characteristics, context, whatever you feel is relevant>", it will deliver a much less AI-sounding response. This is just as much about how you pose a problem in general as it is about computer science.


Agreed - I've worked with PKI for many years, and I know why the various systems work... in the sense of "why you can decrypt again", not in the sense of why they are secure (no other way to decrypt), which no one really knows. But if we assume for a moment that the systems are secure, it is truly fascinating when you think about it in abstract terms. Who would have thought it possible to give someone an exact recipe to follow that scrambles a message to be sent? Consider e.g. an encryption routine with the public key inlined, so it is really just a standalone scrambling procedure. Even though it is completely public, it can only be used to scramble, not unscramble. The sender, who literally went through the steps to scramble the message, cannot undo what they just did (though the sender could have saved the original before scrambling!). And it is not because data is thrown away that it can't be unscrambled - all the information is there, fully recoverable, but only by the person who made the scrambling recipe, and there is no practical way to deduce the unscrambling recipe from the scrambling recipe.
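
A small Go sketch of that asymmetry, using the standard library (a toy example; real key management and message handling are out of scope): anyone holding the public key can run the scrambling recipe, but only the private key undoes it - the sender has no special ability to reverse their own output.

  package main

  import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/sha256"
    "fmt"
  )

  func main() {
    // The recipient generates the key pair and publishes only the public half.
    priv, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
      panic(err)
    }
    pub := &priv.PublicKey

    // The sender follows the public "scrambling recipe"...
    msg := []byte("meet at noon")
    ct, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, pub, msg, nil)
    if err != nil {
      panic(err)
    }

    // ...but cannot reverse it: unscrambling requires the private key,
    // which never left the recipient.
    pt, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, priv, ct, nil)
    if err != nil {
      panic(err)
    }
    fmt.Printf("%s\n", pt) // meet at noon
  }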


I feel exactly the same, and have also implemented it backwards and forwards. I've thought about it in my sleep, trying to recall how it REALLY works. Happens every few years ;-) I always thought it was probably obvious to everyone else what the "magic" is.


Probably what was built was a Fusor. There are tons of instructions on how to build one (https://fusor.net/board/), and there seems to be a lot of focus on how "young" the builders are. Just google: fusion reactor teenager. In some of the stories it becomes apparent that the fusor was never actually finished but was merely underway.

https://newsforkids.net/articles/2024/09/04/16-year-old-stud...
https://online.kidsdiscover.com/quickread/arkansas-teen-buil...
https://interestingengineering.com/energy/nuclear-fusion-rea...
...


In two places the article states that the original game had the ability to save the updated tree ("it had the ability to save progress and load it during the next run" and "It is an amazing example... of how, even with such a simple language, ... and the ability to save new questions").

A later part of the article says the opposite - that the original implementation had "No ability to save progress" and that this is new in the C++ implementation.

I can't help but wonder (also because of other features of the language used) whether the author ran the article through an AI to 'tidy up' before posting... because I've often found ChatGPT etc. to introduce changes in meaning like this rather than just rewriting. This is not to dismiss either the article or the power of LLMs, just a curious observation :)


True. For example, the Apple ][+ came with a demo disk full of programs for the then-new Applesoft BASIC language, and this was one of them. The questions were saved by POKEing them into arrays hard-coded into the program, allowing you to SAVE the modified program after running it.

It seemed like a neat trick at the time. There was also a crude CRUD database that worked the same way, retaining up to 50 names and phone numbers.

Actually that disk had a lot of really cool programs, now that I think about it. A biorhythm plotter, Woz's "Little Brick Out" Breakout clone, and a few other demos by luminaries like Bruce Tognazzini and Bob Bishop. And of course, ELIZA, the mother of all LLMs... only it was called FREUD for some reason.


I believe that the intention was to say "No ability to save progress _between sessions_" in the original program, whereas the C++ implementation saves to text files.

Another portion of the article says more explicitly:

  Limitations:
    Maximum number of nodes: 200.
    The structure is stored only in memory (no disk saving).
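
For contrast, here is a minimal sketch of what saving between sessions looks like (in Go rather than the article's C++, and purely illustrative): serialize the question tree to a text file in pre-order and rebuild it on the next run.

  package main

  import (
    "bufio"
    "fmt"
    "os"
  )

  // Node is either a question (both children set) or an animal (a leaf).
  type Node struct {
    Text    string
    Yes, No *Node
  }

  // save writes the tree in pre-order, one node per line, with a
  // leading Q/A marker so load can rebuild the same shape.
  func save(w *bufio.Writer, n *Node) {
    if n.Yes == nil { // leaf
      fmt.Fprintf(w, "A %s\n", n.Text)
      return
    }
    fmt.Fprintf(w, "Q %s\n", n.Text)
    save(w, n.Yes)
    save(w, n.No)
  }

  // load reads the same format back into a tree.
  func load(s *bufio.Scanner) *Node {
    if !s.Scan() {
      return nil
    }
    line := s.Text()
    n := &Node{Text: line[2:]}
    if line[0] == 'Q' {
      n.Yes = load(s)
      n.No = load(s)
    }
    return n
  }

  func main() {
    root := &Node{
      Text: "Does it live in water?",
      Yes:  &Node{Text: "fish"},
      No:   &Node{Text: "cat"},
    }

    f, _ := os.Create("animals.txt")
    w := bufio.NewWriter(f)
    save(w, root)
    w.Flush()
    f.Close()

    f, _ = os.Open("animals.txt")
    back := load(bufio.NewScanner(f))
    f.Close()
    fmt.Println(back.Text, "/", back.Yes.Text, "/", back.No.Text)
  }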


I don't think so. Consider: "It didn't just attempt to guess the chosen animal but also learned from its mistakes, adding new questions and answers to its knowledge base. What's more, it had the ability to save progress and load it during the next run." Data persistence across trials is already implied by the first sentence, so what would the "What's more, ..." part refer to, given that it mentions both "saving" and "loading"? Even if we grant that "saving" means updating the in-memory data structure, what would "loading" refer to? Also note the later "No ability to save progress", which directly contradicts "it had the ability to save progress". These sentences, both clearly referring to the original, are in direct contradiction with each other and use the exact same terms. Inspection of the code shows that it only ever updates the in-memory structure and never writes to disk.


This is further suggested by the imagery used being AI-generated.


Yes, the whole tone of voice is typical of LLMs as well.


I also think search engines sometimes remove results based on requests from the subjects - at least I've seen notices in Google search results saying that some hits were removed due to 'right to be forgotten' policies.

Unpopular opinion (it seems): I think it is OK to some extent. Not for serious crimes (violence, murder, etc.), but there's an awful lot of 'lesser crime' reported with full names, where the subjects might deserve a clean slate or where people have some right to privacy. In the extreme case, everything court-related and every infraction could be public, subject to auto-generated news, and forever searchable: traffic fines, civil cases, neighbor complaints (in either direction), etc., all part of an immutable record for everyone to look up by name. I personally think that is a violation of privacy, so it has to be balanced. Maybe the best balance is not to write the names to begin with.

In Denmark, where I'm from, court cases are almost always public, and the subjects' names are read aloud as well; however, the names are not listed on the court lists or in the publicly accessible version of the verdicts. For the media to learn a name, a journalist has to physically attend the trial. This already prevents automation and ensures prioritization by the media. Furthermore, most news media have a policy of only publishing the subject's name after a guilty verdict has been reached, and even then only if the verdict was of some severity (unless it concerns a public figure). I just checked one media outlet, and its policy was to only publish the name in the case of a custodial sentence of at least 24 months. If it weren't for such policies, even relatively small cases would be reported with full names and be searchable forever.


I think it's very human to be curious about the cause, but it can also be inappropriate, and some ways of asking are more harmful than others. I've seen people ask very direct questions where their real interest seemed to be assessing whether the cause posed a risk to themselves or was something they could avoid. While that reaction is natural, it prioritizes personal concerns over the needs of those grieving, who no longer have any way of changing the circumstances.

Furthermore, such questions can sometimes come across as blaming, or even be meant that way. For instance, asking 'What cancer did they die of?', 'Were they a smoker?', 'Were they obese?', or 'Did they work with chemicals?' might suggest judgment or responsibility, even if that's not the intent. And as mentioned, sometimes it actually IS the intent (trying to find a way the person brought it on themselves), even if the asker tries to cloak this, perhaps even from themselves. This can add unnecessary pain to those grieving.

