They also decided to rehost the model files in their own (closed) library/repository and store the files split into layers on disk, so you cannot easily reuse model files between applications. I think the point is that models can share layers, though I'm not sure how much space you actually save. I just know that if you use both LM Studio and Ollama you cannot share models, but if you use LM Studio and llama.cpp you can point both at the same files, with no need to download duplicate model weights.
Is there a way to actually write anything more complex using React hooks without shooting oneself in the foot and ending up having to debug useEffect dependency chains?
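To make it concrete, here's a rough sketch (component, state, and endpoint names are all made up) of the kind of chain I mean: one effect resets state that a second effect depends on, so a single prop change fans out into multiple renders and fetches.

```tsx
// Hypothetical example of a useEffect dependency chain: query -> effect 1 ->
// page -> effect 2. None of these names come from a real codebase.
import { useEffect, useState } from "react";

function SearchResults({ query }: { query: string }) {
  const [page, setPage] = useState(1);
  const [results, setResults] = useState<string[]>([]);

  // Effect 1: reset pagination whenever the query changes.
  useEffect(() => {
    setPage(1);
  }, [query]);

  // Effect 2: fetch results whenever query or page changes.
  // When query changes, this fires once with the old page, then again after
  // effect 1 resets it -- the kind of cascade that is painful to debug.
  useEffect(() => {
    fetch(`/api/search?q=${encodeURIComponent(query)}&page=${page}`)
      .then((res) => res.json())
      .then(setResults);
  }, [query, page]);

  return <ul>{results.map((r) => <li key={r}>{r}</li>)}</ul>;
}
```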
I think I like the idea of React hooks and have gotten better at using them... Maybe it's just me, but it really feels like an abstraction that's (too?) easy to abuse in the context of SPAs.
Not standard, but hopefully worth mentioning: the thing that's clicked best for me is the docs on https://ciel-lang.org/ (a "batteries included" Common Lisp image). The examples for how to use its curated libraries match how I try to integrate a new language into my toolbox.
Gentle Introduction to Symbolic Computation is a great book; I learned a lot from it. The 1990 version available for free below has aged well, but you'll need to look elsewhere for getting set up with Emacs and SLIME or whatever environment you want.
I began learning Common Lisp (CL) from the Common Lisp HyperSpec (CLHS): <https://www.lispworks.com/documentation/HyperSpec/Front/Cont...>. When I started about two decades ago, I did not know of any other easily available source, so the CLHS was my only reference back then, and I think it has served me well.
Take the hint: your slop hero image is damaging the rest of your page by association. If you don't want people to assume your page is garbage, don't lead with garbage.
I think lambdas work really well for doing things as a reaction to events.
So, for example, uploading an image to S3 could trigger a job, or you can have lambdas run in parallel off a job queue, since they theoretically scale way better than spinning up lots of 'job processors' to deal with an influx.
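As a rough sketch of what that wiring can look like (assuming the bucket's event notification is already pointed at the function; the logging and "kick off work" parts are placeholders, not a real pipeline):

```typescript
// Minimal S3-triggered Lambda handler sketch. Assumes the function is wired
// to the bucket via an S3 event notification; the actual work is elided.
import type { S3Event } from "aws-lambda";

export const handler = async (event: S3Event): Promise<void> => {
  // One invocation can carry multiple records; each is an object event.
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    console.log(`New object uploaded: s3://${bucket}/${key}`);
    // ...kick off the actual job here, e.g. image processing.
  }
};
```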
One of the more fringe but interesting use cases I'm aware of is lambdas being suggested as a way to reduce CO2 (and cost). The main idea is that since lambdas only run for the exact amount of time needed, you don't keep running or reserving costly machine time. This is still an area of active research, though, and it mostly passes the responsibility on to the infrastructure / PaaS provider.
The fundamental difference between functions-as-a-service and just deploying to a damn server is that you get the ability to scale very quickly.
Our app has a feature that is very demanding but used infrequently. We can split the workload and spin up dozens of lambdas instantly to provide acceptable response times. We wouldn't want to keep a server over-provisioned to handle that amount of scale at all times.
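Not our real code, but a sketch of the fan-out pattern, assuming AWS with the v3 SDK; the worker function name and payload shape are invented:

```typescript
// Fan-out sketch: split the work into chunks and invoke one Lambda per chunk
// in parallel, waiting for all of them. "heavy-feature-worker" and { chunk }
// are hypothetical names for illustration only.
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

async function fanOut(chunks: unknown[]): Promise<string[]> {
  const responses = await Promise.all(
    chunks.map((chunk) =>
      lambda.send(
        new InvokeCommand({
          FunctionName: "heavy-feature-worker", // hypothetical worker function
          InvocationType: "RequestResponse",    // wait for each result
          Payload: Buffer.from(JSON.stringify({ chunk })),
        })
      )
    )
  );

  // Each response payload comes back as bytes; decode to text for the caller.
  return responses.map((r) =>
    r.Payload ? Buffer.from(r.Payload).toString("utf8") : ""
  );
}
```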
All the other purported benefits (e.g. being able to plug Lambda into S3 events) are really just ways to lock yourself in to the vendor. And the ops-less-ness is nice, but you could deploy to a container platform for that too.
I've heard they are good for prototyping things where you need to change APIs and spin up new stuff easily. It might be better than using a service where you're basically required to have devops.
That said, you can also just spin up a VPS and use systemd to manage a service at small scale. There are also other PaaS solutions in addition to "serverless/lambda".
I've heard some horror stories about "lambda" pricing as well, so I'd make sure to read up on that before becoming too dependent on it.
Could someone give me a quick rundown of what Astro is good for, please? Looking at the website it seems to cater heavily towards "marketing sites, blogs, e-commerce websites". I'm wondering if this is reflected in the design and dev experience, or if it could be used for "generic" websites too.
Then for backend stuff it kinda depends on how you deploy it: it could be a single monolithic server, serverless per API route, or probably anything in between.
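For example, a minimal API route looks roughly like this in recent Astro versions (the path and payload are made up); whether it runs inside one Node server or as a serverless function per route comes down to the adapter you configure, not the route code itself:

```typescript
// src/pages/api/hello.ts -- minimal Astro API route sketch. With a serverless
// adapter each route can deploy as its own function; with the Node adapter
// the same code runs in a single server.
import type { APIRoute } from "astro";

export const GET: APIRoute = async () => {
  return new Response(JSON.stringify({ message: "hello" }), {
    headers: { "Content-Type": "application/json" },
  });
};
```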