Congratulations and thank you to Sid, the GitLab CEO, for building an incredible company and product.
GitLab was the first code host to add more products (CI, security, ops, helpdesk, analytics, etc.) and create a whole suite, and GitHub followed. GitLab also built for the enterprise years before GitHub started to give appropriate love to the enterprise. Some people think that GitLab is a GitHub clone. Quite the opposite!
Even if you don't use GitLab yourself, you've been a huge beneficiary of the dev workflow GitLab envisioned and created, and of the competition they've given to Microsoft/GitHub. Competition in this space makes everything better.
> GitLab was the first code host to add more products (CI, security, ops, helpdesk, analytics, etc.) and create a whole suite, and GitHub followed.
Disclaimer: I've worked with Sid and his team in the past.
Few people realize how long it's been since GitLab was a simple clone -- there has been a ton of legitimate net new innovation, and that happened under Sid (and of course all the awesome people working at GitLab).
Another thing that's actually insanely under-discussed is how openly GitLab runs and how successful that model has been for them. I'm not sure I know another open core company that has been this successful in a space where developers bend over backwards to pay nothing and spend hours of their own time (read: $$$$$) to host their own <X>.
IMO they are the only credible competitor to GitHub. They're open core; huge open source orgs, small companies, and large companies alike trust them (rightfully so); and they've built all of this while being incredibly open. To this day you can still self-host their core software (which is a force multiplier for software companies).
GitLab used to stand alone in the "GitHub replacement" market, but these days Gitea is quickly closing in on them. I hope the competition will push GitLab to keep improving, but the switch to "AI everything" makes me wary about its future.
Without GitLab, GitHub would've taken years longer, maybe even more, to develop into what it has become today. I don't think Gitea and its forks would exist.
Now if only GitHub would go the extra mile and copy another feature from GitLab (IPv6 support)…
GitLab is currently marketing itself as the "AI-powered DevSecOps platform," which in my view ditches its history/brand as an open and transparent alternative to GitHub.
Indeed. GitHub Actions runs because GitLab CI walked and Travis crawled. There's a clear evolutionary through line, with each laying the groundwork for its successor.
I disagree that GitHub Actions is much more powerful than GitLab. If you literally mean running containers quickly and reliably, both can be helped by a YC company, depot.dev. GitHub Actions can be easier to set up if you like having stuff outside of your repo and an OCI image. GitLab may not have the actions library that GitHub has, but it can pull Docker images, and that's a powerful build library.
GitLab CI can suppress the checkout altogether, do stuff, and then trigger a downstream job.
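For a concrete (if minimal) sketch: the job names and downstream project path here are made up, but GIT_STRATEGY and trigger are standard GitLab CI keywords.

    # .gitlab-ci.yml: skip the checkout entirely, do stuff, then trigger a downstream pipeline.
    do-stuff:
      stage: build
      variables:
        GIT_STRATEGY: none   # suppress cloning the repo for this job
      script:
        - echo "do stuff that doesn't need the repo contents"

    trigger-downstream:
      stage: deploy
      trigger:
        project: my-group/my-downstream-project   # hypothetical downstream project path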
But really that’s emblematic of the whole thing, where some particular workflow is possible but extremely awkward and hacky. You feel like you’re fighting the system and wish you were just writing whatever it is as a few lines of groovy in a Jenkinsfile.
With great power comes great responsibility, and the responsibility to maintain what started out as “a few lines of groovy” is not one I’d ever take up again.
There’s a middle ground between overly flexible and very constrained, and I think GitHub actions nails that.
Individual steps/actions are reusable components with clear interfaces, tied together by a simple workflow engine. This decoupling is great and allows independent evolution.
As a case in point: GitHub Actions doesn't even offer git clone functionality; it doesn't care about it. Everyone uses the stock actions/checkout action, but there is nothing special about it.
The same goes for caching: the workflow/steps engine doesn't give two shits about that. The end result of this decoupling is that things like sccache and Docker can offer native integrations with the cache system, because it's a separate thing.
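A minimal sketch of what that looks like in practice (the cache path, key, and build command below are placeholders):

    # Checkout and caching are just ordinary, swappable steps, not features of the engine.
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4    # "clone the repo" is an opt-in step like any other
          - uses: actions/cache@v4       # caching is another ordinary step
            with:
              path: ~/.cache/sccache     # hypothetical cache dir
              key: sccache-${{ runner.os }}-${{ hashFiles('**/*.lock') }}
          - run: ./build.sh              # placeholder build command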
Ah interesting, yeah, the whole container build -> CI build handoff has been a long-standing pain point for me across GitHub, GitLab, and even Jenkins. I will investigate what depot.dev is doing... because yeah, proper and intelligent on-demand rebuilding of base containers could be a game changer.
One of the founders of Depot here. Always feel free to ping me directly (email in my bio) if you ever want to chat more about container builds in general.
For sure! I've always felt like a bit of a loner in that the assumption in most of these platforms is that your build starts with either something barebones (just apt) or maybe your platform only (python3:latest).
However, I've typically dealt with builds that have a very heavy dependency load (10-20GB) where it isn't desirable to install everything every time— I'd rather have an intermediate "deps" container that the build can start from. But I don't want to have to manually lifecycle that container; if I have a manifest of what's in my apt repo vs the current container, it should just know automatically when a container rebuild is required.
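Roughly the behavior I want, as a sketch (the manifest file, image name, and Dockerfile are made up):

    # Rebuild the "deps" image only when the dependency manifest changes.
    TAG="deps-$(sha256sum apt-packages.txt | cut -c1-12)"   # hash of the hypothetical manifest
    IMG="registry.example.com/myapp:${TAG}"
    if ! docker pull "$IMG" >/dev/null 2>&1; then           # tag missing -> manifest changed
      docker build -f Dockerfile.deps -t "$IMG" .
      docker push "$IMG"
    fi
    # The main build then starts FROM "$IMG" instead of reinstalling 10-20GB every time.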
Yeah, we’re using it a lot at Sourcegraph. There are some extra APIs it offers beyond what MCP offers, such as annotations (as you can see on the homepage of https://openctx.org). We worked with Anthropic on MCP because this kind of layer benefits everyone, and we’ve already shipped interoperability.
Interesting. In Cody training sessions given by Sourcegraph, I saw OpenCtx mentioned a few times "casually," but the focus was always on Cody core concepts and features like prompt engineering and manual context, etc. It sounds like for enterprise customers, setting up context is meant for infrastructure teams within the company, and end users mostly shouldn't worry about OpenCtx?
Most users won't and shouldn't need to go through the process of adding context sources. In the enterprise, you want these to be chosen by (and pre-authed/configured by) admins, or at least not by each individual user, because that would introduce a lot of friction and inconsistency. We are still working on making that smooth, which is why we haven't been very loud about OpenCtx to end users yet.
But today we already have lots of enterprise customers building their own OpenCtx providers and/or using the `openctx.providers` global settings in Sourcegraph to configure them in its current state. OpenCtx has already been quite valuable to our customers here.
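For reference, the shape is roughly this in global settings (the provider URL below is just a sample bundle; swap in whichever providers you build or use):

    {
      "openctx.providers": {
        // Sample provider bundle URL; replace with your own provider(s).
        "https://openctx.org/npm/@openctx/provider-hello-world": true
      }
    }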
Context is a huge part of the chat experience in Cody, and we're working hard to stay ahead there as well with things like OpenCtx (https://openctx.org) and more code context based on the code graph (defs/refs/etc.). All this competition is good for everyone. :)
Cody (https://cody.dev) will have support for the new Claude 3.5 Sonnet on all tiers (including the free tier) asap. We will reply back here when it's up.
Thank you for Cody! I enjoy using it, and the chat is perfect for brainstorming and iterating. Selecting code + asking it to edit makes coding so much fun. Kinda feel like a caveman at work without it :)
The government is not telling you which headphones you can wear. They are saying that these particular headphones work well enough as a hearing aid that it is OK to market them as such. This protects you from quacks who claim their device is a hearing aid when it doesn't actually work.
To be fair, in the case of hearing aids you are both in the right.
Excessive regulation has created oligopolies and kept prices high in the US. The OTC hearing aid category is meant to help. Before that, low-cost devices tended to remain niche.
OTOH, the regulations were introduced due to blatant sales of substandard devices, esp. in the 1970s. A high-amplification device runs the risk of further damaging your hearing. Many hearing aid users are vulnerable elderly people.
Nobody is telling anyone what kind of headphones they're allowed to wear. They do, however, tell _companies_ that they can't claim their product has medical benefits without proving (to some kind of standard) that the product is safe to use, and does what it claims to do. This system was put in place after businesses spent decades scamming the public with "medicine" that didn't do what it claimed to do and, in many cases, was also poisonous.
Not that many people use Copilot Chat, anecdotally. We've focused on codebase chat when building Cody (https://cody.dev), since we can use a lot of the code search stuff we've built before. It's hard to build, esp. cross-repo where you can't just rely on what's accessible to your editor. You can try it on the web if you sign up and then go to any repository's sidebar on https://sourcegraph.com/search, or get it for VS Code/JetBrains/etc.
What about a URL-defined Ollama? Personally, I run open-webui on an outward-facing Pi on its own VLAN that connects to an internal machine running Ollama. This is so that there is a fallback to the OpenAI API if the machine is down.
Yeah, use something like the snippet below in your VS Code settings to use a different Ollama URL (here it's localhost:11434, but change apiEndpoint, model, and tokens to whatever).
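(Sketching this from memory, so double-check the exact setting keys against the Cody docs; apiEndpoint, model, and the token limits are the parts to change.)

    {
      "cody.dev.models": [
        {
          "provider": "ollama",
          "model": "llama3",                        // whichever model you run in Ollama
          "apiEndpoint": "http://localhost:11434",  // change to your Ollama URL
          "inputTokens": 32000,                     // adjust token limits to the model
          "outputTokens": 4000
        }
      ]
    }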
We should add an easier way to just change the Ollama URL from localhost, so you can see all the Ollama models listed as you can when it's available on localhost. Added to our TODO list!
When I tried Cody around half a year ago, it only used Ollama for tab completion while chat still used proprietary APIs (or the other way around). Has that changed by now, so you can prevent any API calls to third parties in the Cody config?
Yes, Cody can use Ollama for both chat and autocomplete. See https://sourcegraph.com/docs/cody/clients/install-vscode#sup.... This lets you use Cody fully offline, but it doesn't /prevent/ API calls to third parties; you are still able to select online models like Claude 3.5 Sonnet.
I have a WIP PR right now (like literally coding on it right now) making Cody support strict offline mode better (i.e., not even showing online models if you choose to be offline): https://github.com/sourcegraph/cody/pull/5221.
The editor component itself isn't what's interesting about this product... The moat is in all the stuff happening around it. I'm a long time VSCode user, including building my own extensions and building my own stuff with the Monaco editor component. Show me please, where can I get Cursor functionality in VSCode?
In other words... their access to an OpenAI API key is what makes them special? Them and what army?
This is just one of those things where I could see an investor buying the idea, but I'd actually laugh at someone paying $20/month of their own money for this. You're not even getting an IDE for your money; you're getting a version of a free program where you pay to unlock a remote feature. There are open source plugins for VS Code that let you bring your own backend (or use OpenAI's) and do everything you see here.
Seems like their biggest moat is assuming that most of their customers are too lazy to care about superior alternatives.
> Show me please, where can I get Cursor functionality in VSCode?
> The editor component itself isn't what's interesting about this product... The moat is in all the stuff happening around it. I'm a long time VSCode user...
I use Sourcegraph for personal use with my private repos.
In the past, I've found actual bugs that I reported to Justin Dorfman and worked with him to get them assigned to the right engineers so they got fixed before your enterprise customers could experience them.
1. Is there still a path for reporting issues now that the repo has gone private? The GitHub issue tracker can no longer be accessed or searched to report bugs or to figure out whether a bug is known or being worked on.
2. Do you plan to get rid of "Sourcegraph Free" for on premise personal use?
Thanks for helping us make the product better! You can post any issues at https://community.sourcegraph.com. And you can also email support@sourcegraph.com (for customers or any private issues).
We will continue to have a free tier for personal use. Soon we will have a free tier of code search on Sourcegraph.com for private code.