Hacker News | julvo's comments

Do you store code on GitHub? If so, how is GH's guarantee to not use your code different from Cursor's (with privacy mode enabled)?


No, I definitely don't use GitHub. Everything is entirely in-house.

But even if I did, there's a much more solid foundation of trust there, whereas these AI companies have been very shady with their 'better to ask for forgiveness than permission' attitudes of late.


All the model providers have offerings that promise not to train on your code. Can you trust them not to do it anyway? Maybe not. What's the actual damage if they did? You have to balance the expected productivity loss from forgoing the use of these tools with the risk that comes from sharing your code with them. You may want to reevaluate that somewhat frequently. I think there is a tendency in some circles to be a little bit too precious with their code.


Fair enough. In that case, small models like Devstral [1] are probably your best bet.

[1] https://mistral.ai/news/devstral


Cursor has no privacy mode whatsoever. I have been able to exfiltrate just about anything from it.


Do you commit your client code to GitHub?


Looks great! What's your experience of using this for working on real world production code?


60% of the time, it works every time


There's a pricing link at the top of the page; we should probably make it more visible: https://www.wondercraft.ai/pricing


You have an accessibility issue involving large fonts that makes the pricing impossible to see. To replicate, set your font size to 20 and the minimum font size to 18. This should have the same effect in both Firefox and Chrome. Note that that's not even very big as far as low-vision needs go.

Screenshot here: https://share.cleanshot.com/kmXRc3ng

NOTE: using zoom "command +" is NOT the same and does not constitute a valid test for this.


Thank you for flagging, and sorry about this. Fixing ASAP, but please use this link to access it for now: https://www.wondercraft.ai/pricing


Wouldn't salting and hashing be enough for this use case if you keep the salt on the client?
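
For illustration, a minimal sketch of that idea, assuming the goal is to send only an opaque digest to the server while the raw value and the salt stay on the client (the function names and the choice of SHA-256 are assumptions, not from the thread):

    import hashlib
    import secrets

    # The salt is generated once and kept client-side; only the digest is shared.
    def make_client_salt():
        return secrets.token_bytes(32)

    def token_for_server(value, client_salt):
        # Only this hex digest leaves the client.
        return hashlib.sha256(client_salt + value.encode("utf-8")).hexdigest()

    salt = make_client_salt()
    print(token_for_server("some-secret-value", salt))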


Or even a Bloom filter?
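
For illustration, a toy Bloom filter sketch (the bit-array size and number of hashes are arbitrary assumptions): it answers "definitely not present" or "probably present", which can be enough when occasional false positives are acceptable.

    import hashlib

    class BloomFilter:
        # Toy Bloom filter: no false negatives, occasional false positives.
        def __init__(self, num_bits=1 << 20, num_hashes=7):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray((num_bits + 7) // 8)

        def _positions(self, item):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("hello")
    print("hello" in bf)  # True: items that were added are always found
    print("world" in bf)  # almost certainly False; false positives are possible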


Not saying that LLMs are particularly data efficient, but in their defence, they don't only learn what a human learns in a lifetime, but also what humans acquired over the course of evolution. There may be information encoded in our genes that LLMs have to learn from the training data instead.


For sure, a human baby has lots of initial structure that ChatGPT is lacking. Skinner made the point that operant conditioning and evolution are abstractly very similar processes. If this were the case, then it would make sense to think of learning as a process that takes place both within individual human lifespans and over the course of evolutionary history. In fact, on almost anyone's account these days (including that of proponents of LLMs), learning is not very much like operant conditioning. Thus the analogy breaks down, and one can't excuse the amount of data that ChatGPT requires by hand-waving about how this is just the equivalent of the 'learning' that a human baby got indirectly via evolution.


Yes, 100%. I'm using the vim bindings for VS Code and that works great. It makes working without touching the mouse really enjoyable. Also, being familiar with vim is a great fallback option if you're SSHed into a remote machine.


> Now we're creating tools to jump directly to the final product

The same could've been said when the printing press was invented or when manufacturing became highly automated.


Looks amazing! Interesting to see the fully integrated approach. Working on gitpaper.com, which takes the bring-your-own-static-site-generator (byossg TM) route.

Really like the idea of keeping content in git, especially for smaller projects.


Time will tell if we need symbolic representations or if continuous ones are sufficient. In the meantime, it would be more productive to present alternative methods, or at least benchmarks where deep learning models are outperformed, instead of arguing about who said what first and criticising without offering quantitative evidence or alternatives.


To add to the list of very handy footguns:

@reloading to hot-reload a function from source before every invocation (https://github.com/julvo/reloading)
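
For context, a minimal usage sketch of the decorator form (based on the project's README; it also offers a form that wraps a loop's iterable, and exact behaviour may differ between versions):

    from reloading import reloading

    # The decorated function is re-read from its source file before each call,
    # so edits take effect without restarting a long-running process.
    @reloading
    def step(batch):
        return [x * 2 for x in batch]

    if __name__ == "__main__":
        for _ in range(3):
            print(step([1, 2, 3]))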


That looks amazing, thank you! I wouldn't want it in my production code but it seems like it would be great for something that I'm working on in the REPL.


If you are using a REPL, first you should be using IPython. Second, you should be running:

    %load_ext autoreload
    %autoreload 2

Now all your imports will be automatically reloaded.

It's so good that I have it in my IPython startup scripts.
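
One way to do that, as a sketch assuming the default IPython profile (the filename is arbitrary), is to drop an .ipy file into the startup directory, which IPython runs automatically on launch:

    # ~/.ipython/profile_default/startup/00-autoreload.ipy
    # Files in this directory run at IPython startup; .ipy files may contain magics.
    %load_ext autoreload
    %autoreload 2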


There's also reloadium [0] for tight dev iteration loops.

[0] https://reloadium.io

