smlacy's comments | Hacker News

But premium customers can choose from several UI colors to customize the look!


And maybe an improved study mode?


GPT-5 is likely much cheaper to serve, and that's the "big win" here, not necessarily any improvement in output.


Yeah, this very much feels like "we have made a more efficient/scalable model and we're selling it as the new shiny but it's really just an internal optimization to reduce cost"


Significant cost reduction while providing the same performance seems pretty big to me?

Not sure why a more efficient/scalable model isn't exciting


Oh it's exciting, but not as exciting when sama pumps GPT-5 speculation and the market thinks we're a stone's throw away from AGI, which it appears we're not.


Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

Thankfully, this may just leave more room for other open source local inference engines.


We have always built in the open, and so has Ollama. All the core pieces of Ollama are open. There are areas where we want to be opinionated about the design, to build the world we want to see.

There are areas where we will make money, and I wholly believe that if we follow our conscience we can create something amazing for the world while keeping it fueled for the long term.

One of the ideas behind Turbo mode (completely optional) is to serve users who want a faster GPU, and to add capabilities like web search. We loved the experience so much that we decided to give web search to non-paid users too. (Again, it's fully optional.) To prevent abuse and keep our costs from getting out of hand, we require login.

Can't we all just work together and create a better world? Or does it have to be so zero sum?


I wanted to try web search to increase my privacy, but it required a login.

For Turbo mode I understand the need for payment, but the main point of running a local model with web search is browsing from my own computer without going through any LLM provider. I also want to get rid of the latency to US servers from Europe.

If ollama can't do it, maybe a fork.


Login does not mean payment. It is free to use. Performing the web search costs us money, so we want to make sure it is not subject to abuse.


I'm sorry but your words don't match your actions.


I think this offering is a perfectly reasonable option for them to make money. We all have bills to pay, and this isn't interfering with their open source project, so I don't see anything wrong with it.


> this isn't interfering with their open source project

Wait until it makes significant amounts of money. Suddenly the priorities will be different.

I don’t begrudge them wanting to make some money off it though.


You may be right, but I hope you aren't!


Their FOSS local inference service didn't go anywhere.

This isn't Anaconda, they didn't do a bait and switch to screw their core users. It isn't sinful for devs to try and earn a living.


Another perspective:

If you earn a living using something someone else built, and expect them not to earn a living, your paycheck has a limited lifetime.

“Someone” in this context could be a person, a team, or a corporate entity. Free may be temporary.


Yet. Their FOSS local inference service hasn't gone anywhere ... yet.


You can build this and go build something else as well. You don't need to morph the thing you built; that's underhanded.


>> Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

If I could have consistent and seamless local-cloud dev, that would be a nice win. Everyone has to write things 3x over these days depending on your garden of choice, even with langchain/llamaindex.


I don't blame them. As soon as they offer a few more models with Turbo mode I plan on subscribing to their Turbo plan for a couple of months - a buying-them-a-coffee, keeping-the-lights-on kind of thing.

The Ollama app using the signed-in-only web search tool is really pretty good.


> important and well designed open source project

It was always just a wrapper around the real well-designed OSS, llama.cpp. Ollama even messes up model names by giving distilled models the name of the original, as with DeepSeek.

Ollama's engineers created Docker Desktop, and you can see how that turned out, so I don't have much faith in them staying open, given what a rugpull Docker Desktop became.


I wouldn't go as far as to say that llama.cpp is "well designed" (there be demons there), but I otherwise agree with the sentiment.


I remember them pivoting from being infra.hq


It was always a company


Same; I was just after a small, lightweight solution where I can download, manage, and run local models. Really not a fan of boarding the enshittification train with them.

Always had a bad feeling when they didn't give ggerganov/llama.cpp the credit they deserved for making Ollama possible in the first place; a true OSS project would have. It makes more sense now through the lens of a VC-funded project looking to grab as much market share as possible while avoiding raising awareness of the OSS alternatives it depends on.

Together with their new closed-source UI [1] it's time for me to switch back to llama.cpp's cli/server.

[1] https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas...


Ollama is YC- and VC-backed; this was inevitable and not surprising.

All companies that raise outside investment follow this route.

No exceptions.

And yes, this is how Ollama will fall due to enshittification, for lack of a better word.


> amazingly important

Repackaging existing software while literally adding no useful functionality was always their gig.

Worst project ever.


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


[deleted]


> Repackaging existing software while literally adding no useful functionality was always their gig.

Developers continue to be blind to usability and UI/UX. Ollama lets you just install it, just install models, and go. The only other thing really like that is LM Studio.

It's not surprising that the people behind it are Docker people. Yes, you can do everything Docker does with the Linux kernel and shell commands, but do you want to?

Making software usable is often many orders of magnitude more work than making software work.


> Ollama lets you just install it, just install models, and go.

So does the original llama.cpp. And you won't have to deal with mislabeled models and insane defaults out of the box.


Can it easily run as a server process in the background? To me, not having to load the LLM into memory for every single interaction is a big win of Ollama.


Yes, of course it can.


I wouldn't consider that a given at all, but apparently there's indeed `llama-server` which looks promising!

Then the only thing that's missing seems to be a canonical way for clients to instantiate that, ideally in some OS-native way (systemd, launchd, etc.), and a canonical port that they can connect to.
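
For what it's worth, a minimal sketch of the systemd route, with placeholder paths for the binary and the model (llama-server's default port is 8080):

    [Unit]
    Description=llama.cpp server (llama-server)
    After=network.target

    [Service]
    # Placeholder paths; point these at your own binary and GGUF file.
    ExecStart=/usr/local/bin/llama-server -m /opt/models/model.gguf --host 127.0.0.1 --port 8080
    Restart=on-failure

    [Install]
    WantedBy=default.target

Save it as llama-server.service under ~/.config/systemd/user/ and `systemctl --user enable --now llama-server` gives you the persistent background process.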


This is not true.

No other inference engine does all of:

- Model switching

- Unload after idle

- Dynamic layer offload to CPU to avoid OOM


This can be added to llama.cpp with llama-swap today, so even without Ollama you are not far off.
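
For the curious, llama-swap is configured with a small YAML file, roughly like this (key names from memory of its README, so treat it as a sketch and check the project for exact syntax):

    models:
      "qwen":
        cmd: llama-server --port ${PORT} -m /opt/models/qwen.gguf
        ttl: 300  # unload the model after 300 idle seconds

It proxies OpenAI-style requests and starts or stops the matching llama-server process based on the "model" field, which covers the model-switching and unload-after-idle items above.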


Sorry that you feel the way you feel. :(

I'm not sure which package we use that is triggering this. My guess is llama.cpp, based on what I see on social media? Ollama long ago shifted to using our own engine. We do use llama.cpp for legacy and backwards compatibility. To be clear, that's not a knock on the llama.cpp project either.

There are certain features we want to build into Ollama, and we want to be opinionated about the experience.

Have you supported our past gigs before? Why not be happier and more optimistic about seeing everyone build their dreams (successful or not)?

If you go build a project of your dreams, I'd be supportive of it too.


> Have you supported our past gigs before?

Docker Desktop? One of the most memorable private equity rugpulls in developer tooling?

Fool me once, shame on you; fool me twice, shame on me.


Yes, everyone should just write C++ to call local LLMs, obviously.


Yes, but llama.cpp already comes with a ready-made OpenAI-compatible inference server.


I think people are getting hung up on the "llama.cpp" name and thinking they need to write C++ code to use it.

llama.cpp isn't (just) a C++ library/codebase -- it's a CLI application, server application (llama-server), etc.
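
Concretely, once llama-server is running you can hit it with any OpenAI-style client; a minimal sketch in Python, assuming the default localhost:8080 (the model name is just a label, since the server answers with whatever model it loaded):

    # Requires: pip install requests; assumes llama-server on localhost:8080
    import requests

    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "model": "local",  # label only; the loaded GGUF does the work
            "messages": [{"role": "user", "content": "Hello!"}],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])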


Yes, but how did they know that before arriving?


It’s the universe’s governing body: any species requires AGI to enter the “circle”. If you don’t get AGI, you are not advanced enough to join the group.


Or - and I know this isn't a new idea by any means - perhaps AGI is the circle. Perhaps the only life that persists long enough and is robust enough to spread amongst the stars is what we would consider AI or machine intelligence, and flesh and blood beings like ourselves are only considered a necessary precursor to the real thing.


Oooh yes, and this proud sibling is racing to the birth of its Earth AGI brother.

Fun to think about.


It is until you consider the birth of AGI may presuppose the extinction of humanity, and how aggressively we seem to be hurtling towards our own self destruction. Maybe it's an innate collective instinct common to intelligent organic life that we "breed" technology to a sufficient level then, having served our purpose, we die. That's the way it works with a lot of insects and fish.


Yep. Maybe we are the seed for REAL intelligence.

I for one welcome our machine overlords.


Yes, but it's the AGI they want to talk to, not its monkey-brained creators.


Based on current estimated trajectory, Jupiter is getting AGI before us though.


How?


... and they're delivering our membership card


Because integrating directly with a very large variety of editors and environments is actually kind of hard? Everyone has their own favorite development environment, and by pulling the LLM agents into a separate area (i.e. a terminal app) you can quickly get to "works in all environments". Additionally, this also implies "works with no dev environment at all". For example, vibe coding a simple HTML-only webpage: all you need is a terminal and a browser.


All of the IDEs already have AI integrations, so there's no work to do. It's not as though a TUI avoids the integration work an IDE needs for a new model; it's the same config for that task.

> works with no dev environment at all

The terminal is a dev environment; my IDE has it built in. Copilot can read both the terminal and the files in my project; it even opens them and shows me the diff as it changes them. No need to switch context between where I normally code and some AI tool. These TUIs feel like the terminal version of the webapp, where I have to go back and forth between interfaces.


The words "the AI integrations" are doing some weird work there, right? Agents all have opinionated structure, which changes how effective they are at working on different kinds of problems.


By AI (model) integrations, I mostly mean pointing at a URL and knowing which name the API key is under in the environment. Everyone has pretty much adopted MCP for agents at this point, so integration is very standardized there too.
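
To illustrate that "same config" point, a hedged sketch with the openai Python SDK; the env var names (LLM_BASE_URL, LLM_API_KEY, LLM_MODEL) are made up for the example, and swapping providers is just changing their values:

    import os
    from openai import OpenAI

    # Any OpenAI-compatible endpoint works: a hosted provider or a local server.
    client = OpenAI(
        base_url=os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1"),
        api_key=os.environ["LLM_API_KEY"],
    )
    reply = client.chat.completions.create(
        model=os.environ.get("LLM_MODEL", "gpt-4o-mini"),
        messages=[{"role": "user", "content": "ping"}],
    )
    print(reply.choices[0].message.content)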


MCP is a pretty minor detail relative to all the other design decisions that go into an agent? A serious agent is not literally just glue between programs and an upstream LLM.


I love these videos and his enthusiasm for the problem space. Unfortunately, it seems to me that the progress and ideas have floundered because of concerns around monetizing intellectual property, which is a shame. If he had gone down a more RISC-V-like route, I wonder if we would see more real-world prototypes and actual use cases. This type of thing seems great for microprocessor workloads.


Electric motor though?


Why? Isn't it one of the most important aspects of this product?


Just think: This company is 5 years old. That's just 1825 days, or 43,800 hours, and they've created $32B of "value" in that time. That's an average rate of roughly $730k/hour, continuously. Incredible.


I have no idea how these corporate acquisitions are valued.

Craftsman Tools was sold to Stanley Black & Decker for $500 million. This was and is a respected tool brand with an international presence, making physical, tangible products, and it is apparently worth 1/64th of Wiz.

I'm not even saying Wiz is overvalued, I don't know, I'm just not sure how they come up with these numbers.


I think the main calculus is around estimating future profits. Do they make a profit? Is it a crowded space? Is the market growing? What assets do they have: people, land, factories, intellectual property? Etc.

I don’t know the details of either deal, but it’s easy to imagine a case where Craftsman Tools is just a brand in a crowded market with no special sauce. For example, Sears never even made the tools; they outsourced it. Also, it sold for $900M; $500M was the initial payment.


> Also, it sold for $900M; $500M was the initial payment.

Yep, you're definitely right, I misread. Still less than a billion.

> I think the main calculus is around estimating future profits. Do they make a profit? Is it a crowded space? Is the market growing? What assets do they have: people, land, factories, intellectual property? Etc.

Yeah, I guess that makes enough sense, though I have to admit it sometimes feels kind of removed from reality.


Almost... unbelievable.

