Interesting that the article skirts around the part where they imply they'll lift the hold if and when he gives them the IP / works unpaid to build a solution for them.
A quick search on planhub.ca for 60GB with BYOD and unlimited calls in Canada turns up $35. That's 8x your cost, and that's the minimum, from obscure telcos that piggyback on others' networks, so quality is not great. The Big 3 are more like $45-50.
It's often cheaper to buy a European eSIM and roam. It's laughable, and it has been stifling innovation for decades. But hey, someone's getting rich.
I was surprised to see Swift support in the blog entry for 1.0. That's exciting! Every other CRDT I've seen either doesn't support Swift or has very unfinished support for it, making it really hard to use in native iOS programming.
I was wondering, though: is there any kind of transfer mechanism built into this, like what yjs calls "providers"? I mean a built-in way to communicate between clients. Or is that on the developer to come up with?
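(For context on what I mean by a provider, here's the basic shape in yjs itself, using the real y-websocket package — the server URL and room name below are just placeholders:)

    import * as Y from 'yjs'
    import { WebsocketProvider } from 'y-websocket'

    // The shared document that CRDT updates are applied to.
    const doc = new Y.Doc()

    // The "provider" is the transport layer: it syncs `doc` with other
    // clients through a websocket server. URL and room are placeholders.
    const provider = new WebsocketProvider('wss://example.com', 'my-room', doc)

    provider.on('status', (event: { status: string }) => {
      console.log(event.status) // 'connected' or 'disconnected'
    })

    // Once connected, edits to shared types propagate automatically.
    doc.getText('shared-text').insert(0, 'hello')

That's the kind of batteries-included transport I'm asking about.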
Here's my experience. Like some of the other replies to your comment, nothing I've made that's more than a few lines of code has worked after one prompt, or even two or three. An example of something I'm working on at the moment is here: https://github.com/fivestones/family-organizer. That codebase is about 99% LLM-generated: I'd say 60% ChatGPT 4o, 30% Claude Sonnet 3.5, and the rest mostly ChatGPT o1-preview. Just the last commit has a bit of Claude Sonnet 3.5-new.
I can send you my chat transcripts if that would be helpful, but it would take some work since they're scattered over lots of different conversations.
At the beginning I was trying to describe the whole project to the LLM and then ask it to implement one feature. After maybe 5-20 prompts and iterations back and forth, I'd have something I was happy with for that feature and would move on to the next. However, I found, like some others here, that the model would get bogged down in mistakes it had made previously, would forget what I told it originally, or just wouldn't work as well the longer the conversation went.

So what I switched to, and what seems to work really well, is to paste my entire current codebase (or at least all the relevant files) into a fresh chat and then tell it about the one new feature I want. I try to focus on adding new features or on fixing a specific problem. Sometimes (especially for a new feature) I'll explain that this is my current code, here is the new thing I want it to do, and then tell it not to write any code yet but instead to ask me any questions it has. I'll answer all its questions and tell it to ask any follow-ups: "If you don't have any more questions, just say 'I'm ready.'" When it gets to the point of saying "I'm ready," if I'm working with ChatGPT I'll switch the model from 4o to o1-preview and just say, "OK, go ahead."

After it spits out its response, it usually takes some iteration in the same chat: me copying and pasting code into VS Code, running it, pasting any errors back to the LLM or describing what I didn't like about the results, and repeating. I might go through that loop 5-10 times for something small, or 20-25 times for something bigger. Once I've got something working, I'll abandon that chat and start over in a new one with my next problem or desired feature.
I've done basically nothing to tell it how I want the code structured. For the project above I wanted to use InstantDB, so I fed it some of the InstantDB documentation and examples at the beginning. Later features just worked: it followed along successfully with what it saw in my codebase already. I'm also using TypeScript/Next.js, and those were pretty much the only constraints I've given it on how to structure the code.
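(For a sense of what it was pattern-matching on: this is roughly the shape of InstantDB code in a React component, per their quickstart — the app id and the todos schema here are placeholders, and details may differ by version:)

    import { init, tx, id } from '@instantdb/react'

    // Placeholder app id; InstantDB assigns one per app.
    const db = init({ appId: 'YOUR_APP_ID' })

    export default function Todos() {
      // Live query: re-renders whenever todos change, locally or from another client.
      const { isLoading, error, data } = db.useQuery({ todos: {} })
      if (isLoading) return <div>Loading…</div>
      if (error) return <div>{error.message}</div>
      return (
        <div>
          <button onClick={() =>
            db.transact(tx.todos[id()].update({ text: 'new item', done: false }))}>
            Add
          </button>
          <ul>{data.todos.map((t: any) => <li key={t.id}>{t.text}</li>)}</ul>
        </div>
      )
    }

Once a couple of components like that were in the codebase I pasted in, the model kept producing more of the same.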
I'm not a programmer, and I think if you look at the code you'll probably see lots of stuff that looks bad to you if you are one. But I don't have plans to deploy this code at scale; it's just something I'm making for my family to use, and for whoever finds it on GitHub. So as long as it works and I'm happy with the result, I'm not too concerned about the code. My biggest concern might be future features I want to add, and whether the code I'm adding now will make that future code hard to write. Usually I'll just tell the LLM something like, "keep in mind when making this db schema that later we'll need to do x or y," and leave it at that. The other thing is that I've never used React, let alone Next.js, and have only dabbled in JS here and there. But here I am, making something that works and that I'm happy with, thanks to the LLMs. That's pretty amazing to me.
Sometimes I struggle to get it to do what I want. Usually I'll then just scrap the latest code changes, revert to the last commit, and start over, often with a different LLM model.
It sounds like your use case is a lot different than mine, as I’m just doing stuff in my spare time for fun and for me or my family to use. But maybe some of those ideas will help you. Let me know if you want some chat transcripts.
One other thing: I found a VS Code extension that lets me select a file or set of files in the VS Code explorer, right-click, and export them for LLM consumption. This is really helpful. It makes a tree of whichever files I selected (like the output of the terminal tree command), follows that with the full text of each file, and copies it all to the clipboard. So to start a new chat, I just select files, right-click, export for LLM, and paste into a new chat window.
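(If you can't find the extension, the same thing is a few lines of script — here's a minimal Node/TypeScript sketch, taking the file list from argv and printing to stdout so you can pipe it to pbcopy or similar:)

    // export-for-llm.ts — print a file listing plus full contents for pasting into a chat.
    // Usage: npx tsx export-for-llm.ts src/app.tsx src/db.ts | pbcopy
    import { readFileSync } from 'node:fs'

    const files = process.argv.slice(2).sort()

    // Flat listing (not nested like `tree`, but enough for the model).
    console.log('Files:')
    for (const f of files) console.log('  ' + f)

    // Full contents of each file, labeled so the model can tell them apart.
    for (const f of files) {
      console.log(`\n--- ${f} ---`)
      console.log(readFileSync(f, 'utf8'))
    }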
So if I already have an app using instantdb, how hard would it be to switch to Jazz? Does it work in a very similar way, or would it likely require a lot of code modification?