
Wow, what a nightmare of a non-deterministic, bug-introducing library.

Super fun idea though, I love the concept. But I'm getting the chills imagining the havoc this could cause.



Didn't someone back in the day write a library that let you import an arbitrary Python function from GitHub by name alone? It was obviously meant as a joke, but with AIcolytes everywhere you can't really tell anymore...


There's one that automatically loads code out of the best-matching Stack Overflow answer: https://github.com/drathier/stack-overflow-import
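
For the curious, going from memory of that project's README (so treat this as a sketch, not gospel), usage looks roughly like:

    from stackoverflow import quick_sort  # importing searches Stack Overflow for "quick sort"

    print(quick_sort.sort([1, 3, 2, 5, 4]))  # runs code scraped from the best-matching answer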


Why not go further? Just expose a shell to the internet and let them do the coding work for you /s


It's not really something to be sarcastic about.

I've actually done this, setting aside a virtual machine specifically for the purpose, trying to move a step towards a full-blown AI agent.


Why on earth did you want to do that?


“Twitch does…”


Flask also started as an April 1st joke, in response to bottle.py but ever so slightly more sane. It gathered so much positive response that mitsuhiko basically had to make it into a real thing, and he later regretted some of the API choices (like global variables proxying per-request objects).


Is there somewhere I can read about those regrets?


Two days after the announcement: https://lucumr.pocoo.org/2010/4/3/april-1st-post-mortem/

I think there was another, later retrospective? Can't find it now.


I second this; I need to know more. Programming lore is my jam.


It's like automatically copy-pasting code from StackOverflow, taken to the next level.


Are there any stable-output large language models? Like Stable Diffusion does for image diffusion models.


If you use a deterministic sampling strategy for the next token (e.g., always output the token with the highest probability), then a traditional LLM should be deterministic on the same hardware/software stack.
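
A minimal sketch of that with Hugging Face transformers (gpt2 purely as a stand-in model): do_sample=False selects greedy decoding, i.e. argmax over the next-token distribution, so repeated runs on the same stack produce the same text.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("def quick_sort(xs):", return_tensors="pt").input_ids
    # Greedy decoding: always emit the highest-probability token.
    out = model.generate(ids, do_sample=False, max_new_tokens=40)
    print(tok.decode(out[0]))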


Wouldn't seeding the RNG used to pick the next token be more configurable? How would changing the hardware/other software make a difference to what comes out of the model?


> Wouldn't seeding the RNG used to pick the next token be more configurable?

Sure, that would work.

> How would changing the hardware/other software make a difference to what comes out of the model?

Floating point arithmetic is not entirely consistent between different GPUs/TPUs/operating systems.
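
A sketch covering both points, again assuming Hugging Face transformers: set_seed makes sampled decoding repeatable, but only on a fixed hardware/software combination, since kernel implementations and floating-point rounding differ across devices.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    ids = tok("import antigravity", return_tensors="pt").input_ids

    set_seed(42)  # seeds Python, NumPy, and torch RNGs in one call
    a = model.generate(ids, do_sample=True, max_new_tokens=20)
    set_seed(42)
    b = model.generate(ids, do_sample=True, max_new_tokens=20)

    print(torch.equal(a, b))  # True on one machine; not guaranteed across GPUs/drivers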


Deterministic is one thing, but stable to small perturbations in the input is another.


> Deterministic is one thing, but stable to small perturbations in the input is another.

Yes, and the one thing that was asked about was "deterministic", not "stable to small perturbations in the input".


This looks "fun" too: commit a fix for a small typo -> the app breaks.


So nothing's changed, then :D


Sounds like a fun way to learn effective debugging.


It imports the bugs as well. No human involvement needed. Automagically.


I mean, we're at the very early stages of code generation.

As with self-driving cars versus human drivers, there will come a point when LLM-generated code is less buggy than human-written code.


That's a compiler with more steps.



