Inspired by the recent post describing relativity in words of four letters or fewer, I asked ChatGPT to do the same for other things, like gravity. It couldn't help but throw in a couple of five-letter words (usually plurals). Same with Claude. So this could be a good benchmark?
I think any software engineer can identify with the feeling you get on the first run of a fix you're 100% sure has to work, only to find that nothing has changed.
Corollary: the relief/anguish when you discover that the reason none of your fixes worked, and none of your debugging print statements produced output, is that you were editing a different copy of the file from the one being built and run, because you moved or renamed something and your editor didn't notice.
This reminds me of when I was trying to do Minecraft-style chunking in Bevy. Instead of making the not-so-obvious fix, I threw parallelization, compiler optimization, caching, release flags, etc. at my project and nothing made it go faster. I could not figure out why it was so slow. Turns out what I was doing was so unoptimized that I might as well have been loading the whole world every frame.
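For anyone hitting the same wall, the usual shape of the fix is to cache chunks across frames and only generate the ones that enter the view radius. A minimal sketch (Python rather than Bevy/Rust, and all the names here are hypothetical):

```python
# Cache chunks across frames; only (re)generate the ones entering the
# view radius, instead of rebuilding everything every frame.
CHUNK_SIZE = 16
loaded_chunks = {}  # (cx, cz) -> chunk data, persists across frames

def generate_chunk(cx, cz):
    # Placeholder for the expensive terrain/mesh generation step.
    return f"chunk({cx},{cz})"

def update_chunks(player_x, player_z, radius=4):
    px, pz = int(player_x) // CHUNK_SIZE, int(player_z) // CHUNK_SIZE
    wanted = {(cx, cz)
              for cx in range(px - radius, px + radius + 1)
              for cz in range(pz - radius, pz + radius + 1)}
    for key in wanted - loaded_chunks.keys():   # just came into range
        loaded_chunks[key] = generate_chunk(*key)
    for key in loaded_chunks.keys() - wanted:   # just left range
        del loaded_chunks[key]
```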
I was genuinely concerned that everything I was doing with mango and Pixman was going to turn out to be pointless. Thankfully it wasn't; there was a noticeable difference after introducing them. But it was a gamble for sure, because there was no smaller test I could do in advance to know it would be worth it. If I wanted to replace that DLL, I was going to have to replace the whole DLL: it was C++, and the DLL exports were all mangled names for classes that all interacted with each other, so I couldn't cleanly replace one call and see whether it was a good idea. I try to gather as much evidence as I can that something will work before I make the leap, but I've learned that if you really want to get stuff done, sometimes you just have to go for it and assume there is a way to salvage it if it fails.
This has happened to me so many times. Especially in the distributed database I work on ... "hmm maybe I need to let the experiment run for longer, this data is noisy so it probably needs more time to show a trend line".
My father grew up in a somewhat rural Irish village and there was one farmer who would take his horse and cart to the pub (fairly anachronistic even in his day) in the knowledge that no matter how passed-out drunk he got the other patrons would load him into the cart and the horse would take him home. Take that, self-driving cars!
I read that as him trying to move his money to another bank that would allow him to make the transfer. His current bank suspected this and wouldn't even let him close his account. So they confiscated his money to prevent someone else from supposedly stealing it - pretty Kafkaesque, I think.
It's a bit of a fringe theory but there's a suggestion that the human 'alliance' with wolves gave us the edge over Neanderthals and other predators and ensured that it was us who ultimately survived as a species. It's a nice thought for a dog lover.
Why wouldn't Neanderthals form an alliance with wolves too? Especially considering Neanderthals had a multi-hundred-thousand-year head start in wolf range compared to Homo sapiens.
It’s an interesting question. I don’t know if there’s any evidence of wolf domestication by Neanderthals. If they didn’t domesticate them, it would be interesting to try to work out why – maybe there’s a subtle difference in psychology between H. sapiens and H. neanderthalensis that enabled us to bridge that gap but not them?
There isn't a whole lot of evidence for how Neanderthals lived. We have only discovered the remains of 400 Neanderthals (about 30 mostly-complete skeletons).
Again we are in the realm of speculation upon speculation, but Neanderthals didn't have visible sclera (whites of the eyes), which, according to the co-operative eye hypothesis, is what lets domesticated hunting dogs follow our gaze. It does seem odd that Neanderthals didn't try to domesticate them too - surely the first reaction on seeing humans and dogs bring down a mammoth or corral large deer would be 'got to get us some of that' - but, as sibling comments say, we don't know much about them really.
Character-level operations are difficult for LLMs. Because of tokenization, they don't really "perceive" strings as lists of characters. There are LLMs that ingest raw bytes, but those are intended to process binary data.
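You can see this directly with OpenAI's tiktoken library (the exact split below is illustrative and depends on the encoding):

```python
# Show that the model sees token IDs, not individual characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)
print([enc.decode_single_token_bytes(t) for t in tokens])
# e.g. [b'str', b'aw', b'berry'] -- the letters never appear as separate
# units, so a task like "count the r's" has no direct representation.
```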
I had an idea like this for helping introverts with icebreaking small talk: flashcard-style entries for each person, with info on what you spoke about last time and a pre-prepared opener for the next time you bump into them, the card being updated each time you meet.
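Roughly the data model I had in mind, as a sketch (Python; all the names are made up):

```python
# One card per person, refreshed after every encounter.
from dataclasses import dataclass, field

@dataclass
class PersonCard:
    name: str
    last_topics: list = field(default_factory=list)
    next_opener: str = ""

    def record_meeting(self, topics, opener):
        # Update the card right after you talk to them.
        self.last_topics = topics
        self.next_opener = opener

card = PersonCard("Alice")
card.record_meeting(["marathon training"], "How did the race go?")
```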
Exactly! I've found I'm a lot more talkative and I come across as a faster thinker with this approach. I have about 15 subjects (outside of coding - sports, wine, pop culture, national parks, current music, popular fiction, TV shows, movies) that I try to be knowledgeable on, and the flashcards help.
Very cool. I've no idea if this is possible without rooting the Kindle, but I wonder whether you could highlight sections of text in a book and send them to something like this with a request to summarise/explain?
It's a good idea! Amazon doesn't offer an API for fetching highlights but services like Readwise seem to get around this by using a Chrome extension to do it. Not sure how that could be done on a Kindle though...
This is very cool. I wonder if we are going to have to get used to a new paradigm in software, where you have tools that are incredibly powerful and that you just accept that sometimes it 'gets it wrong'. There's no debugging, no root cause analysis, just a shrug of the shoulders and 'sometimes it gets it wrong mate, what're you gonna do?'. This is probably the mental model most laypeople already have of software, but for software engineers it's something of a shift. Bit of a deal with the devil, perhaps.
> tools that are incredibly powerful and that you just accept that sometimes it 'gets it wrong'. There's no debugging, no root cause analysis, just a shrug of the shoulders and 'sometimes it gets it wrong mate, what're you gonna do?'.
So, pretty much what we have now with the vast majority of mega-tech companies with zero customer service. Plus all the growth-hack startups playing "monkey see, monkey do."
The difference is that you can throw a senior engineer at a bug and know that the issue can be root caused and fixed because the behavior is "fundamentally deterministic", whereas with AI for the foreseeable future all you can do is maybe tweak the model and pray.
> There's no debugging, no root cause analysis, just a shrug of the shoulders and 'sometimes it gets it wrong mate, what're you gonna do?'
This has been the case for as long as I can remember; it seems to have more to do with individual developers' typical methodology than with the tools available.
I remember a bunch of issues with early npm versions that were resolved by deleting the node_modules directory and running `npm install` again. Sometimes npm borked the directory, sometimes it didn't; deleting everything and starting from scratch resolved many of those issues.
You are absolutely right, of course: for most day-to-day bugs we don't have the inclination, time, or knowledge to root-cause them, much less fix them. But I feel it's a comfort to know that you (or someone) could.
Well, as in many cases, I think it depends. There are some situations where you can accept errors, but in others definitely not: imagine you're trying to delete a set of files from a directory that also contains files you're interested in. If the AI makes a mistake and deletes some of those other files, you will be disappointed. Now, you should have a backup. But what if you had the AI assistant come up with the backup command for you, and by mistake it didn't include that directory?
I wonder how we might design for such a system. I'd say as a starting point any action should be undo-able. Then you give it a go to see if it works, and if it doesn't, you can always roll back.
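As one hedged illustration of what undo-able might mean for the file-deletion example above (my own sketch; all paths and names are made up):

```python
# Soft-delete: move files into a trash directory instead of unlinking
# them, so a bad (possibly AI-generated) command can be rolled back.
import shutil
from pathlib import Path

TRASH = Path(".trash")

def soft_delete(path: Path) -> Path:
    """Move a file into the trash instead of deleting it outright."""
    TRASH.mkdir(exist_ok=True)
    dest = TRASH / path.name
    shutil.move(str(path), str(dest))
    return dest

def undo(trashed: Path, original: Path) -> None:
    """Restore a soft-deleted file to its original location."""
    shutil.move(str(trashed), str(original))
```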
I've read this is good practice in any system, as a user can get inured to 'are you sure' dialogs and just click through them reflexively.