Show HN: Onit – Source-available ChatGPT Desktop with local mode, Claude, Gemini (github.com/synth-inc)
162 points by telenardo 2 days ago | 57 comments
Hey Hacker News, it’s Tim Lenardo and I’m launching v1 of Onit today!

Onit is ChatGPT Desktop, but with local mode and support for other model providers (Anthropic, GoogleAI, etc). It's also like Cursor Chat, but everywhere on your computer - not just in your IDE!

Onit is open-source! You can download a pre-built version from our website: www.getonit.ai

Or build directly from the source code: https://github.com/synth-inc/onit

We built this because we believe:

Universal Access: AI assistants should be accessible from anywhere on my computer, not just in the browser or in specific apps.

Provider Freedom: Consumers should have the choice between providers (Anthropic, OpenAI, etc.), not be locked into a single one (ChatGPT Desktop only has OpenAI models).

Local first: AI is more useful with access to your data. But that doesn't count for much if you have to upload personal files to an untrusted server. Onit will always provide options for local processing. No personal data leaves your computer without approval.

Customizability: Onit is your assistant. You should be able to configure it to your liking.

Extensibility: Onit should allow the community to build and share extensions, making it more useful for everyone.

The features for V1 include:

Local mode - chat with any model running locally on Ollama! No internet connection required.

Multi-provider support - top models from OpenAI, Anthropic, xAI, and GoogleAI.

File upload - add images or files for context (bonus: drag & drop works too!).

History - revisit prior chats through the history view or with a simple up/down arrow shortcut.

Customizable shortcut - you pick your hotkey to launch the chat window (Command+Zero by default).

Anticipated questions:

What data are you collecting? Onit V1 does not have a server. Local requests are handled locally, and remote requests are sent to model providers directly from the client. We collect crash reports through Firebase and a single "chat sent" event through PostHog analytics. We don't store your prompts or responses.

How does Onit support local mode? To use local mode, run Ollama. You can get Ollama here: https://ollama.com/ Onit gets the list of your local models through Ollama’s API.
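For the curious: Ollama exposes the installed models over its local HTTP API (GET /api/tags on port 11434 by default), and chat requests go to that same local port, so nothing leaves the machine. A simplified Swift sketch of that lookup (type names here are just illustrative, not our exact code):

    import Foundation

    // Minimal shape of Ollama's /api/tags response - we only need the model names.
    struct OllamaTagsResponse: Decodable {
        struct Model: Decodable { let name: String }
        let models: [Model]
    }

    // Ask the local Ollama server which models are installed.
    func fetchLocalModels() async throws -> [String] {
        let url = URL(string: "http://localhost:11434/api/tags")!
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode(OllamaTagsResponse.self, from: data).models.map(\.name)
    }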

Which models do you support? For remote models, Onit V1 supports Anthropic, OpenAI, xAI and GoogleAI. Default models include o1, o1-mini, GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 2.0, Grok 2, and Grok 2 Vision. For local mode, Onit supports any models you can run locally on Ollama!

What license is Onit under? We’re releasing V1 under a Creative Commons Non-Commercial license. We believe the transparency of open-source is critical. We also want to make sure individuals can customize Onit to their needs (please submit PRs!). However, we don’t want people to sell the code as their own.

Where is the monetization? We’re not monetizing V1. In the future we may add paid premium features. Local chat will, of course, always remain free. If you disagree with a monetized feature, you can always build from source!

Why not Linux or Windows? Gotta start somewhere! If the reception is positive, we’ll work hard to add further support.

Who are we? We are Synth, Inc, a small team of developers in San Francisco building at the frontier of AI progress. Other projects include Checkbin (www.checkbin.dev) and Alias (deprecated - www.alias.inc).

We’d love to hear from you! Feel free to reach out at contact@getonit dot ai.

Future roadmap includes:

Autocontext - automatically pull context from your computer, rather than having to repeatedly upload.

Local-RAG - let users index and create context from their files without uploading anything.

Local-typeahead - i.e. Cursor Tab, but for everywhere.

Additional support - add Linux/Windows, Mistral/DeepSeek, etc.

(Maybe) Bundle Ollama to avoid the double download.

And lots more!






> We built this because we believe: Universal Access: AI assistants should be accessible from anywhere on my computer, not just in the browser or in specific apps.

I find this somewhat ironic, given that the software only supports Apple computers. It would have been nice for OP to mention this fact upfront in the announcement, so as not to get non-Apple users' expectations up too soon.


Addressed towards the end of the post:

"Why not Linux or Windows? Gotta start somewhere! If the reception is positive, we’ll work hard to add further support."

As you can see from the commit log, we have 3 people working on this. So we're quite limited in what we can take on. That said, our belief holds and we'd love to support Linux and Windows.

I had "MacOS" in my original title, but HN limits titles to 80 characters!


It would be nice if the README made it clear toward the top that this is Mac software. The screenshot and mention of Xcode give that vibe of course, but I kept reading anyway and felt a bit bummed to only confirm at the end.

Looks like a cool project and wishing y'all the best. Let us know if and when the Linux support drops :)


You might be interested in a properly cross-platform version of this type of thing that's also on the front page just now: https://news.ycombinator.com/item?id=42789323

I can't connect to any remote models without allowing the app to connect to the domain "syntheticco.blob.core.windows.net". Blocking this prevents any API connection from functioning. Why are you sending something to Microsoft's blob storage?

We pull the list of available models from that URL! Not sending anything, just fetching so we can display an up-to-date list of models.

This is the URL that we fetch: https://syntheticco.blob.core.windows.net/onit/models.json
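If you want to verify for yourself, it's a plain GET of that static file and the request carries no user data - roughly this (a simplified sketch, not our exact code; it just returns the parsed JSON rather than our real model types):

    import Foundation

    // Fetch the static model list; nothing is uploaded in this request.
    func fetchRemoteModelList() async throws -> Any {
        let url = URL(string: "https://syntheticco.blob.core.windows.net/onit/models.json")!
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONSerialization.jsonObject(with: data)
    }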


Fair enough, thanks for the fast reply!

Nice start. There's definitely room for a good native macOS chat client, I have tried a few now and none feel perfect. I found two that feel usable:

HuggingChat (https://github.com/huggingface/chat-macOS) It has a launcher interface in the current release, and code, LaTeX, etc. are pretty-printed. You can switch from HF-hosted models to local MLX ones (though those are hardcoded right now, I think). I like it for quick queries to qwen2.5-coder and I think it would be great if they develop it more.

Enchanted (https://github.com/gluonfield/enchanted) This one feels a bit buggy and it might be abandoned, but it has basic functionality for working with ollama models.

Also worth a mention is aichat (https://github.com/sigoden/aichat). It's not a native gui app, but it's an impressive cli client.


I've had a generally positive experience with MindMac[1], which is another native macOS app. I've raised a few issues and feature requests and the developer has been pretty responsive to feedback.

[1] https://mindmac.app/


The idea of a universal AI assistant across the desktop is cool. Like the emphasis on local processing and provider choice.

I have tried out V1 and while it's a bit barebones, the planned features like 'Autocontext' and 'Local-RAG' sound promising. Devil's in the implementation details though.


Basic polish things you might want to fix up quickly:

1) The site in the github repo description is a broken link.

2) The description itself says "iOS client", which I don't understand at all.

3) The actual webpage's title is "My Framer Site".

Thanks for the feedback! Fixed these - though the site title seems to be cached...

Congratulations on the launch Tim, love the CMD + 0 shortcut and local model support. It's definitely something I'd use!

Bug report: cmd + 8 is behaving weirdly on my laptop, maximizing and minimizing several times (macOS Sonoma 14.5). Happy to provide more details!

Also, I'd expect pressing cmd + 0 to act like a toggle, to close it if it's already open (instead of needing to press escape), but maybe that's just me!

Looking forward to future work on this!


Sarp! Good to hear from you! I hope life has been good since the Instagram days. Yes, I've noticed the multi-resizing issue with cmd + 8 - I'll look into it this week. Regarding the cmd + 0 toggle, I think I can probably make that work too. We can set it up so you can set your dismiss shortcut. Then, you can choose the same as the launch shortcut, making it a toggle. I'll also take a look at that this week.
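Roughly, the toggle just means the hotkey handler checks whether the panel is already on screen - something like this AppKit sketch (simplified for illustration, not our exact code):

    import AppKit

    // Called when the shared launch/dismiss shortcut fires.
    // `panel` stands in for whatever NSPanel hosts the chat UI.
    func toggleChatPanel(_ panel: NSPanel) {
        if panel.isVisible {
            panel.orderOut(nil)                     // dismiss
        } else {
            panel.makeKeyAndOrderFront(nil)         // show
            NSApp.activate(ignoringOtherApps: true) // bring to the foreground
        }
    }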

Oh no, ⌘+0 is so useful for resetting zoom level. Glad it’s (apparently) possible to change it!

It's hard to find unoccupied shortcuts these days! I don't use shortcuts on the numbers often, so I set that as a default. But yes, it's easily configurable in settings so you can choose something that works for your workflows.

Maybe add "MacOS" to the title.

This +1. Should be in the guidelines for submissions. Tired of clicking on interesting threads only to find they target one specific platform.

This is all a bit tangential, and I understand the gif is meant to be illustrative, but I think it reinforces the view that things like "summarize document" are good prompts.

People really need to learn to be more detailed when telling AI assistants what to do because they benefit from context. Saying just "summarize document" or "what does this code do" with no context is going to lead to subpar results. It would be like stopping a random person on the street and asking just that one question. Also why paste a screenshot of code instead of the actual code??

Finally, your gif is 4MB, which is very slow to load, especially for such a short recording. Consider using a tool like gifski to reduce the file size while maintaining quality. The grainy background might be hurting more than helping, unless that's a byproduct of dithering from the conversion from video to gif.


I would agree!

(that this is all a bit tangential)

Do you plan to release a Linux version?

Or do you have a way to compile it for Linux (Debian)?


Looks like a sharp utility!

Any plans to expand into non-text generation? My roadblocks with Jan (similar, as far as I can see?), were that I couldn't run any of the image generation or 3D model generation releases, so I'd be very interested in something local that was equipped for media/file output, rather than just text output.


Love it! I'm curious whether OpenRouter support is on the roadmap, to really allow people to use any option.

Interesting! Can I hook this up to my calendar and e-mail?

edit: s/an/and


Not yet, but if all goes according to plan, you should be able to soon!

What value does this provide over using Ollama and one of the many already available cross-platform local frontends for it?

Such as:

- GPT4All
- LM Studio
- LocalAI
- Jan
- KoboldAI
- SillyTavern
- Oobabooga
- ComfyUI (now supports text)
- Llama.cpp
- Ollama

Many of which have very large capabilities, more permissive licenses and are very actively maintained.

And then there is also OpenwebUI which seems to be consolidating this space rapidly with a huge feature set and ecosystem of tools and models. And 'Artifacts' style IDE coming shortly.


The license is disappointing. I plan to create one under Apache.

OpenWebUI seems to be really popular and feature-rich, and already covers a lot of this project's roadmap. BSD-3 licensed.

https://github.com/open-webui/open-webui


I think I just blasted through an entire month's quota of "seamless" and "effortless" reading that

I wonder if the author used a seamless connection to an effortless AI?

sigh no Linux support :-(

I'd be fine with that if it had "MacOS" in the title.

I was so so excited to read this, then I saw the headline is deceptive. It's not Open Source; it uses a Creative Common "Non-Commercial" license.

CC licenses are not meant for software. They explicitly say so on their FAQ: https://creativecommons.org/faq/#can-i-apply-a-creative-comm...

And non-commercial licenses are not Open Source, period. This has been well established since the 1990s, both by the FSF and the OSI.

It's such a promising piece of software, but deceptive advertising is a bad way to start off a relationship of any sort.


Ok, we've replaced open-source with source-available in the title for now, so hopefully the discussion can get back on topic.

I would like to add that this is probably not deceptive advertising. At least not intentionally deceptive, as many people, including me, didn't know that CC licenses are not meant for software and are not considered open source. I don't know if it is a common misunderstanding or not, but I think there is a strong case that some people would intuitively think so.

Yes, that's right. This was definitely not intentional and we are very open to changing it to something more appropriate!

I think the license choice is great. It allows noncommercial use, modification, and redistribution. It’s not “open source” according to the champions of the term (since it violates the use-for-any-purpose requirement) but I’m a huge fan of this license and license several of my projects CC BY-NC where AGPL would be too heavy-handed.

BSD or MIT license would be nice.

AGPL would be better

Amazon and other cloud providers avoid AGPL, so I think it's closer to the intentions of the OP.

I think your choice is very appropriate.

And it is open source.

Probably not OSI-open source or FSF-open source but it is open source, period.


"It's not recognized as Open Source by the Open Source body, and doesn't meet the criteria of Free/Open Source Software, but is Open Source" is a bit like saying "I used GMO and petroleum based pesticides, but my produce is all organic."

But here the source is open!

Why should we restrict the meaning of Open Source, a societal movement going back decades, to a list of criteria that the FSF or OSI decided?

Open source is not a trademark of the FSF or OSI.

OP did not say it is free/libre software, but just open source, which it is.

We don't need "source available"; just "open source" is correct.

PS: can you define the open source body in your previous comment?


Why should words like "organic" in relation to food mean without pesticides? I mean all carbon and water based life forms are organic, right?

I can define Open Source easily, using the OSI definition.

There is not a trademark for Open Source because they failed to secure the trademark, but we have decades of use for the term meaning something specific.


It might not be, but I can't understand how someone who has written such advanced software, includes a monetization plan, and then posts about it on HN doesn't take the time to choose an appropriate license.

Even if they didn't know CC wasn't suitable for software, everyone knows that non-commercial isn't Open Source.

I didn't dig into the software, but I wonder if the licenses of the dependencies even allow this, e.g. if any are GPL or similar.


> CC wasn't suitable for software

This is wrong. CC is perfectly fine for software in some cases, such as here.

Ok, CC is not tailored specifically for software, hence the general advice "you should use something else", but I do not see why CC would not be suitable here to achieve OP's goals.

Can someone explain?


Creative Commons' FAQ addresses this

    Unlike software-specific licenses, CC licenses do 
    not contain specific terms about the distribution 
    of source code, which is often important to ensuring 
    the free reuse and modifiability of software. 
    Many software licenses also address patent rights, 
    which are important to software but may not be 
    applicable to other copyrightable works. Additionally,
    our licenses are currently not compatible with the 
    major software licenses, so it would be difficult to 
    integrate CC-licensed work with other free software. 
    Existing software licenses were designed specifically
    for use with software and offer a similar set of 
    rights to the Creative Commons licenses.

Software licenses, especially the more "advanced" licenses such as the GPL, MPL, and others, include very specific language around what counts as use, what counts as distribution, what counts as linking, derived works, and, importantly, patents.

The CC licenses do an amazing job when it comes to artistic work such as books, movies, music, etc. but you don't have the same issues there, and that's why even CC says that they don't recommend using them for software.


As someone developing CC0-licensed software, this had me a bit shook, so let me highlight that your link does clarify that CC0 licenses are fine for software and are entirely separate from other CC licenses.

Relevant sub-link (from OP's link): https://wiki.creativecommons.org/wiki/CC0_FAQ#May_I_apply_CC...


> And non-commercial licenses are not Open Source, period. This has been well established since the 1990s, both by the FSF and the OSI.

That may be a bit misleading - the Free Software Foundation has long held strong opinions about the phrase 'open source'.[0]

IIRC 'open source' became formalised by the OSI around 1998 - and despite the stated intent to clarify things where arguably no clarification was needed (a lot of people felt it was not too onerous to explain the beer and speech, libre and gratis, concepts to novices) it continues to reduce clarity. Viz.

[0] https://www.gnu.org/philosophy/open-source-misses-the-point....


I disagree.

Ok, a non-commercial Creative Commons license is not "OSI-open source" or "FSF-open source", but it is technically "open source". The source is open.

The open source societal movement is much broader than the narrow definition given by OSI or FSF.

OP, your tool is perfectly fine with a non-commercial Creative Commons license. The fact that CC licenses are not specific to software does not imply they are a bad choice for software.

Here I find it a very appropriate license for OP's needs: he wants to open the source code, but prevent someone else from taking it and making money from it under another name. This is totally fine.


Then say source available, not open source, because the latter connotes the freedoms as mentioned in the OSI definition, for most people who use that phrase.

Open source is not trademarked by FSF or OSI. I think it is ok to call it open source since the source is open.

Let's not redefine words based on what you personally think is correct when people en masse have been using them to mean a certain specific concept. It does not have to be trademarked, it can have a de facto meaning that everyone generally understands to be what it means.

That's because "open source" is a bad name, since it only focuses on source code availability rather than three other essential freedoms. "Free/libre software" always made more sense, but "open source" got significantly more popular.


