As soon as I saw the screenshot, I knew it was using the Dear ImGui library [1] under the hood because of the familiar ImGui design cues. Dear ImGui is originally meant to be used in combination with game and graphics engines that support hardware acceleration. Thus, you get high framerates.
It's actively developed, FOSS (LGPL), native C++ (Qt), runs on Windows/macOS/Linux, supports go.mod, and uses gocode/gotools for intellisense instead of gopls. It has integrated debugging, go to definition/usages, and some refactoring support.
LiteIDE struggles with scrolling on a top-of-the-line MacBook, even for short files. I guess it's using Qt or some other non-native UI toolkit, which is not ideal.
I was hoping they were going to take my money (when I saw the "Join Beta"), but unfortunately it is Windows-only.
Personally, I know only one person working in Go on Windows; I know 40+ on either Linux or macOS.
I had my credit card out at "Vim Keybindings", only to be slightly disappointed by the Windows-only limitation (for now). Looking forward to trying this out on Linux soon. Although I'm pretty happy with Vim, I'm curious to see if I'm missing out on a significantly better experience.
That being said, not a fan of a monthly subscription for a native desktop application.
For desktop applications I just translate the monthly subscription into a yearly price. Most software right now is sold as either a yearly subscription or a "year of upgrades", so it is one and the same.
If I can't use the software without an active subscription, it is definitely not one and the same. I like Sublime Text's license of 3 years of upgrade with a purchase, no limitation on usage of the software. Unclear (AFAICT from the FAQ) how it would work for CodePerfect 95.
I have some applications that are years old without upgrading to the current version. Reason: the old version works fine and does everything I need. Even so, I sometimes decide to upgrade at some point just to feed the developer.
This is a compelling product pitch to me - I alternate between using vim but not having everything memorized, and using GoLand for its autocomplete and various correction/detection/auto-import-fixing conveniences but wishing it were faster.
I'd be willing to pay $5/month for a beta version, if I were using Windows. Or alternatively, if someone is watching, I'd be willing to pay $5 for a trial of a Mac version :) I respect that it's quite a bit of work to support a second OS and window manager, and you have to start somewhere. Hope it goes well and makes its way over to Mac soon!
> We charge a monthly subscription fee for you to use the IDE. Right now that fee is $5/mo.
Why should I pay $5/month for this instead of just coughing up the cash for CLion? Or, better yet, why wouldn't I just save that money for a year or two and buy a GPU for smoother scrolling? I love lightweight software, but this seems like putting the cart before the horse on so many levels...
$5 a month, $60 a year. It's not horrible pricing, but it's competing with GoLand at that price point, and GoLand is fully featured in a way I don't see this ever being.
You're also competing with LiteIDE. It's free, light as hell, and as far as I can tell does everything this does, probably just as lightly.
Beyond that, I'm sure you've heard "why does everything need to be a service these days?" for the thousandth time. Har har, but seriously, why not give this a reasonable 1995 pricing model if you're playing up the 95 angle, for Christ's sake?
Whenever I see "per month" accompanying the price, I just close the page. I only use non-perpetual software licenses when required and paid for by clients of mine.
A subscription is very valid for continuous usage, but everyone who's just taking a look at Go from time to time is effectively excluded.
Why not just count the usages and then ask for a subscription?
I really hate the "all or nothing" mentality.
Sure do whatever. I'll just ignore the product. I really do not care about this "valid for continuous usage". You do what suits you and I do what suits me.
Subscriptions are death by a thousand cuts. A number of years ago I cut all subscriptions after doing a self-audit and discovering just how much these things were bleeding from me. Now I won't even touch a subscription service anymore unless there's no alternative.
I STILL get the odd $0.23 subscription charge for some AWS thing I haven't used in a decade, and I can't figure out where it's even coming from.
I had this once; I ended up finding something in a region I don't normally use.
I have also canceled an AWS account over this, which I would not recommend, as they require you to go through support to reactivate. Doable but annoying.
GoLand is slow. It's such a drag when you accidentally open it.
So many times I've abandoned trying something because of the thought of opening GoLand.
Thankfully I got Sublime.
I'll give this one a try too.
It's really a reasonable cost. They're probably a solo developer or at most a small team, and any subscription model will make that vastly more sustainable.
The problem is that price isn't justified by the size or character of the team, but by the value added for the customer compared to other products in the same space.
60 bucks per year for a language specific IDE which has multiple competitors with more features just doesn't seem like that good of a deal. Others have pointed out LiteIDE, which is open source and doesn't cost anything.
I've never managed to get go support working on vscode. Even the devs themselves couldn't help me get it working (on Mac or on Ubuntu). I basically get a slow text editor with no code insight, and no debug support. LiteIDE works most of the time, though.
I'm curious what's weird about your setup. In my experience you add the extension, it asks to install some tools, and everything just magically works. I've had basically this experience on Linux and macOS, and most of my team uses it without issue.
Yes, that's what's so infuriating about it. I make a point of not doing anything out of the ordinary when I install things, or customizing my OS, specifically because I HATE it when this happens...
GoLand's debugger, autocompletes, and just general feature set made it very hard to watch co-workers struggle with VS Code. Ended up with everyone getting GoLand licences eventually.
When I'm coding, the frame rate of the IDE is the least of my concerns - I'm busy thinking about what to type. Even if my IDE takes a while to start up, I'm OK with it because I don't restart it or reboot my machine that often. In terms of ergonomics, both VSCode and JetBrains IDEs support vi mode and work well enough not to be a show-stopper. Anyway, we can definitely do with more solutions to ensure diversity of choice, so good luck to the project.
Not to be a prick, but I doubt Go users will like this; they tend to prefer simple things to more complex things. Maybe retargeting this at Java/C++ users would be better.
Jokes (not really) aside, why does an IDE need to be designed for a language anyway?
It's a self-contained application using the Dear ImGui library rather than a bloated framework, and is designed for a single language. It's not simple compared to the likes of vi, sure, but it's almost certainly many fewer lines of code and much more architecturally simple than just about any other (popular, in-use) IDE out there.
Because so far noöne has figured out the set of extensibility points needed for a language-agnostic one that doesn’t result in horrible bloat and sprawling API surfaces. The original language-agnostic IDE—Emacs—is probably the simplest, and it’s still quite complex. Acme[2] is interesting, but Spartan when it comes to actual IDE-ish features, and its UI always rubbed me the wrong way.
I feel this is somewhat related to the fact that noöne has figured out how to do extensible widget toolkits or scene graphs either. (Yes, Electron does in fact solve an actual problem, if in a profoundly unsatisfying way.) Actually, just a text input box that could handle the complexities of the world’s writing systems (RTL, complex font shaping, arbitrary input methods, autocorrect, all that with at least one cursor or selection) and was capable of supporting the facts vs rumors approach from FRP[1] would be a significant advancement re toolkits. As for scene graphs, I don’t actually know of any viable general approaches to assembling an interface out of independent parts (that doesn’t work by presupposing a large substrate of features in the host “shell” and providing a separate extension point for each of shortcut, menu item, toolbar button, sidebar, dialog, ...).
Actually, now that I’ve written all of that out, it seems that another way to put my second point is that the language-agnostic IDE problem is a proper superset of the composable GUIs problem, and nobody knows how to solve that one. Composable CLIs are easy—they’re called “[textual] programming”; but then composable GUIs (or TUIs, the graphics/text distinction is immaterial here) would seem to correspond to visual programming, and last I checked the latter still sucked.
Huh (not a native speaker). This seemed obviously wrong to me: it is clearly a single negative pronoun syntactically, not the determiner no modifying the animate(!) pronoun one—for example, you can’t add adjuncts: *no great one, like *every great one, *any great one, or even just *a great one, sounds like it’s about Lovecraftian Great Ones or inanimate used cars instead of the faceless but animate pronominal “one”.
Then I looked it up, and it turns out that both “no one” and “no-one” are accepted spellings for this linguistic gadget while “noone” is rejected by prescriptivists as a confusing spelling because of the double “o”, even if it would be in line with “everyone” etc. “Noöne” is admittedly funky and nonstandard, but if the no-diaeresis concatenated version is rejected because it’s confusing (though come on English, your whole spelling is nuts, why are you so stubborn on this one), then the diaeresis version should be completely acceptable.
> Because so far noöne has figured out the set of extensibility points needed for a language-agnostic one that doesn’t result in horrible bloat and sprawling API surfaces.
Rob Pike did. Acme is absurdly simple yet absurdly powerful. Because everything is text and any piece of text is executable. Combined with the plumber[0] it beats every other approach to extensibility that I've seen. You can have any "IDE-ish feature" with a plumber rule. And you don't lose simplicity.
> Emacs is probably the simplest language-agnostic IDE.
I have said a couple of words about Acme as you can see, and I’ve read the Acme and Plumber papers and a couple of intros; their Unixy take on Smalltalk’s “Do It” seemed ingenious in principle... But I have to admit that I always got turned off early enough in my attempts to actually use Acme (very shallow and superficial reason: non-primary-button drag is painful on a Macbook) that I can’t say I’ve given it an honest try. So the rest of this is only from my understanding of the manuals and might well be wildly wrong (I certainly hope to see counterexamples!).
OK, so what are the things that, to me, differentiate an IDE from “just” a programmer’s editor (mind you, I usually use the latter) but don’t seem to have an obvious solution in Acme?
- File tree with interactively collapsible subtrees (although this is not really an IDE feature, even Gedit has one). Haven’t seen an implementation, and don’t really see how one could fit into the UI paradigm without being awkward (would I have to select “collapse foo” to collapse the foo/ subtree? I’d much prefer to just click on foo).
- Jump to definition and list uses, using semantic analysis and not just text search. The former should be doable with plumbing but I don’t remember it described anywhere; the latter should be dead easy using something like cscope, but again, I haven’t seen an implementation, and also the interface would have to be a list of all matches and not in-file highlights, which is sometimes what you want and sometimes not.
- Rename identifier, again preferably non-dumb. Sure you can call out to a separate CLI tool if you don’t mind the lack of interactive feedback, but given the brilliant handling (theory, really) of multiple selections in Sam, I’m really frustrated Acme not only can’t accept multiple selection ranges from an external program, but doesn’t seem to support multiple selections at all.
- Autocompletion. While it is admittedly not essential for programming under Plan 9 (and that’s a good thing), in other environments unfamiliar libraries or horribly large APIs (looking at you, Google Apps Script) make it a necessity. As far as I can see, impossible to implement without forcing the programmer to take their hands off the keyboard.
- Debugger integration, or at least navigation through source files as the debugger steps through the program (showing variable values from the stack is good but optional). Should be implementable with a cooperating debugger, but once again not actually implemented as far as I’ve seen, and the lack of any way to highlight the current line will make it a bit unwieldy.
- VCS integration, at least a way to do `git add --interactive` (or equivalent) without going through a hundred questions in the terminal. Doesn’t really fit into how the UI works, unless I’m simply forced to edit patches manually. Even simply highlighting the dirty lines while editing would be tremendously useful, but the UI still can’t highlight lines.
- Syntax highlighting (yes, I went there). While you can indeed do without the classic syntax highlighting many people use, what I find helpful about it (even if I have to write my own syntax files) is that it can quickly highlight obvious syntactic problems that can result in pages of awful compiler spew: runaway strings, unbalanced parens inside a(n explicitly terminated) statement, etc. Jump to or highlight matching parenthesis is useful for the same reason, but I don’t consider it a must.
That’s a lot. Some of those things are possible, and many are, irritatingly, almost possible but not quite. But even if the UI admitted all of them, none of the tools that would implement them actually exist. Thus I conclude(d) that Acme is not a functional IDE, and while it is a good advance towards a more elegant “shell” (as GP called it) for one, the UI model is just a tad too restrictive in multiple directions. I suspect that trying to expand it in all of those directions, however, will ruin the elegance.
Just to be clear, this rant (I can’t seem to be able to produce anything else) is not an anti-Acme rant. Its point is not that Acme is a bad editor (it probably isn’t, it’s just that I can’t get the control scheme to click for me). Its point is that it is not a solution to the IDE bloat problem because it’s not an IDE. Perhaps we shouldn’t be solving the IDE bloat problem at all and just give up on the idea; but supposing that we want to, I don’t see how Acme accomplishes that.
P.S. I don’t use Emacs regularly either, but curiously the things for which I have used it in the past—Org mode, Agda mode, Paredit, and of course Dired—use its flexibility in such an essential way that not only are they impossible to implement atop Acme, they are probably impossible to do on top of the code editing widget in any other extensible editor except those that implement essentially the same ideology (Climacs, Edwin, et al.).
> why does an IDE need to be designed for a language anyway
People want their IDE to do all the ancillary tasks, and Go builds most of these into the core, so it's a good first target. You can be "feature complete" without complexity.
Consider the case of a C++ IDE. Which compiler do you support: gcc, clang, msvc? Which build system do you support: Bazel, make, cmake, gyp, autotools? Which C++ standard version does the syntax highlighter support, and can you change it per project? If you do change it per project, how do you configure this?
If you pick some opinionated subset, your userbase is 0 people. If you support everything people want, you have 30 people working on it for 10 years and you still fail.
The author is trying to solve a problem that doesn't exist. UI performance hasn't been a problem for any programming language in the last 10 years, unless you're doing Scala or something heavy that kills your machine.
I mean, given the current features, I would not call it an IDE. Also, the price is too much compared with GoLand (and it's Windows-only; I don't think anyone I know uses Windows for Go).
Cool name! However, I never had a situation with Goland where speed was an issue. In most cases, I'm waiting for the code to compile. Page rendering speed is an issue in things like web browsers. Besides, my main 4k monitor has a refresh rate of 30hz, so scrolling will always be janky...
For me, the killer feature of any IDE has always been integrated debugging and able to inspect the data. Integrated testing has also been another feature I cannot live without today.
I had to run a 4k screen through an adapter that could only do 30hz@4k and it was a horrible experience - I could notice the lag even when I was typing, and scrolling anything was stuttering. I feel like my eyes got tired faster working on that screen. Why would you use that as a main display?
I doubt it's going to damage your eyes. It's sample-and-hold, after all. It is just very tedious that you have to wait 33ms between when you type a key and it appears on the screen. You will notice, and you will be frustrated.
I have a 360Hz monitor and it is really amazing what the reduction in input latency feels like in the real world. All of this input lag that gets blamed on USB or bad input subsystems is really just double-buffered vsync.
I think that this article might be referring to the CRT displays (when the CRT gun would pulsate and cause visible flicker especially when the monitor was not interlaced). I do not notice any flickering with the LCD display. In fact, the text appears sharper when compared to another 4k monitor I have with a higher refresh rate... It's not the best though, but compared to what was 10-20 years ago it's amazing.
That said, it's certainly not for watching video or playing games.
Because that was the only 4k 28in I could afford. It's not the best I must say, but does the job. The monitor is massive, so I don't scroll that much anyway. I can have everything visible at once.
Maybe I'm reading too much into this - but I found it significant that the beta is starting as Windows only.
This was obviously the norm in the 90s and 2000s. Then the Mac started gaining mindshare after OS X with the dev community.
Today, I just assume a beta launch will work on a Mac.
In the last few years, it seems Microsoft has done a great job with features like WSL on Windows.
So is this a sign that Windows is regaining mindshare?
There's something funny about proudly proclaiming no GC and written in C++ for a golang IDE. It's not wrong; different tradeoffs for different uses. Just funny.
It is funny--a bit. Decades ago, I often used Perl to take a data table in some messy form and convert it into C source code, which I would combine with other C source that I wrote and compile into an executable. I thought it was ironic that I knew C well enough to write it and well enough to tell Perl how to write it, and it was still easier to tell Perl to write C than to tell C to write C.
So, Perl is obviously a much better language. Except that it turns out it's a lot easier to write Perl itself (the interpreter) in C than to write it in Perl. So Perl is better, because it's better for "writing C", and C is better, because it's better for "writing Perl".
The irony is still funny, but it is clearly based on a feeling that language goodness is one-dimensional, even when we know that any comparison of hammer and saw on a single dimension is going to miss most of what matters.
I had a cryptography professor recommend for one assignment that, instead of trying to recognize the correct plaintext by its similarity to English (an approach that yielded results that could be manually adjusted into the correct plaintext, but wasn't able to find the plaintext by itself...), I should try to disqualify incorrect plaintexts by recognizing them as non-English.
The approach there was just to ingest a long book from Project Gutenberg, record all of the trigrams (including across word boundaries), and disqualify a candidate plaintext if it had more than some low number (3, but it was a short text) of unrecognized trigrams. This worked beautifully.
For reasons I don't recall, I wrote the assignment in C. I didn't want to do the parsing of the book into a trigram table in C. So I did that separately, and emitted C code to assign to a three-dimensional character array for every trigram I found. Then my assignment was the concatenation of some code initializing the array to zeros, the emitted code adding ones to it, and, thousands of lines later, the actual decryption code.
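For the curious, a minimal Go sketch of the trigram filter itself (skipping the emit-C step) might look like the following; the corpus string is just a stand-in for the Project Gutenberg book, and the threshold of 3 follows the description above:

    package main

    import (
        "fmt"
        "strings"
    )

    // trigrams records every 3-byte window of the corpus, including across
    // word boundaries, as described above.
    func trigrams(s string) map[string]bool {
        s = strings.ToLower(s)
        set := make(map[string]bool)
        for i := 0; i+3 <= len(s); i++ {
            set[s[i:i+3]] = true
        }
        return set
    }

    // looksEnglish rejects a candidate plaintext once it contains more than
    // maxUnknown trigrams that never occur in the reference corpus.
    func looksEnglish(corpus map[string]bool, candidate string, maxUnknown int) bool {
        candidate = strings.ToLower(candidate)
        unknown := 0
        for i := 0; i+3 <= len(candidate); i++ {
            if !corpus[candidate[i:i+3]] {
                unknown++
                if unknown > maxUnknown {
                    return false
                }
            }
        }
        return true
    }

    func main() {
        corpus := trigrams("the quick brown fox jumps over the lazy dog") // stand-in corpus
        fmt.Println(looksEnglish(corpus, "the lazy fox", 3))
        fmt.Println(looksEnglish(corpus, "xqzvkw pqrtj", 3))
    }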
Modern GC doesn't even preclude the possibility of developing a no-lag UI. You just have to be very careful with how you manage your data so that you don't get hit with more heavy-handed collection mechanisms (i.e. large gen2 cleanups in .NET).
Small and short-lived class instances (more ideally structs) are the key to success with low-latency applications in most GC'd languages. If you are only touching gen0/1 in a .NET5 app, you will probably find the GC impact to UI latency to be negligible. In .NET, avoiding gen2 allocations is pretty easy. The biggest thing is replacing every byte[] with Stream as feasible, so you don't send objects directly to LOH (gen2). I believe the LOH threshold for byte[] is lower than the published limit of 85k, but I cannot recall the specifics (might be closer to 10k). Regardless, you really shouldn't be holding onto chunks of memory that large if you can ever avoid it. In some cases you can cheat. Pass around the path/id to a file and then stream it from disk at the very last possible moment (i.e. copy a FileStream direct to the HttpContext.Response.Body stream). If you absolutely must allocate large objects, try to reuse them as much as possible. Act as if anything entering gen2 is in memory forever, because with enough LOH fragmentation, it might as well be.
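In Go terms (this is a loose analogue, not the .NET mechanics above), the same discipline is: pass the path around and stream at the last moment instead of ever materializing a big []byte. A minimal sketch, with a made-up file path:

    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    // serveReport streams a (hypothetical) large file straight to the response.
    // io.Copy uses a small scratch buffer, so no large, long-lived allocation
    // ever exists for the GC to worry about.
    func serveReport(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("/var/data/report.bin") // hypothetical path
        if err != nil {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        defer f.Close()
        if _, err := io.Copy(w, f); err != nil {
            log.Printf("copy failed: %v", err)
        }
    }

    func main() {
        http.HandleFunc("/report", serveReport)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }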
UI is one thing, but real-time process control is a wholly different animal. If you want to write a PID in a GC'd language, you'd better be prepared to eschew any notion of allocation in the process under collection. Even if you don't allocate, there are runtime components that still will, so you will almost always have some minimum amount of noise to contend with. That said, if you only need to run a PID loop at ~10kHz (100µs loop period), you probably won't have any issues. Just make sure you put the PID loop on a high-priority thread and care for its surroundings gingerly. A Raspi4+ is actually an amazing platform for this sort of thing.
I suspect Go's GC would be particularly well-fit for this kind of application considering it typically runs in <1ms with no exceptionally large pauses (except perhaps in pathological cases).
Like C#, Go has value types, so controlling allocations should be pretty straightforward.
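As a toy illustration of both points (the PID-loop discipline above and the value-type angle), here's a hedged Go sketch; the gains, tick rate, and sensor/actuator functions are made up, and a real controller would need far more care:

    package main

    import (
        "runtime"
        "time"
    )

    // pid keeps all controller state in plain value fields, allocated once,
    // so the loop itself does no per-iteration allocation.
    type pid struct {
        kp, ki, kd float64
        integral   float64
        prevErr    float64
    }

    func (c *pid) step(setpoint, measured, dt float64) float64 {
        err := setpoint - measured
        c.integral += err * dt
        deriv := (err - c.prevErr) / dt
        c.prevErr = err
        return c.kp*err + c.ki*c.integral + c.kd*deriv
    }

    func readSensor() float64     { return 0 } // stand-in
    func writeActuator(_ float64) {}           // stand-in

    func main() {
        runtime.LockOSThread() // keep the control loop pinned to one OS thread

        c := &pid{kp: 1.2, ki: 0.01, kd: 0.05} // hypothetical gains
        const period = time.Millisecond        // ~1 kHz, per the sibling comments

        tick := time.NewTicker(period)
        defer tick.Stop()
        for range tick.C {
            out := c.step(10.0, readSensor(), period.Seconds())
            writeActuator(out)
        }
    }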
I had the same thought; it probably wouldn't even interfere with their stated goal of running at 144fps (and of course there would be more Go code analysis libraries available in the Go ecosystem).
I could imagine plenty of other reasons to go with C++ though - there's still going to be a higher performance ceiling, maybe they plan to support languages other than Go, maybe they want a minimal file size compiling to WASM or to use specific C++ libraries etc. It would be interesting to see the original author's thinking behind it, on the website it's mainly comparing C++ to web technologies.
I've written a $SHELL in Go which implements a lot of the same features one requires from an IDE, such as low-latency keyboard input, an events subsystem, syntax highlighting, syntax completion, auto-completion suggestions (popups), etc., all of which need to be context-sensitive. So it's definitely possible to write an IDE in Go.
I'd wager the biggest hurdle to writing an IDE in Go isn't the language but rather the lack of mature graphics library bindings. I've seen some attempts at building GUI toolkits in Go but it's fair to say the language's strong points are still firmly in CLI and backend web development domains.
> I suspect Go's GC would be particularly well-fit for this kind of application considering it typically runs in <1ms with no exceptionally large pauses (except perhaps in pathological cases).
The raw CPU time used doesn't really matter if it doesn't cause the UI to block. It does for server applications; needing 30% more cores costs 30% more money, which you could use for something else. But for tiny programs that run on your workstation, using 1.3% CPU doesn't cost more than 1% CPU.
You do have to watch out for the case where you can't use all the cores, and GC is blocking actual computation the user wants to perform.
(Also, you're at about 4000 cores when saving 30% buys you another developer. So you have to think about when the right time to unlock that money is, and how much it costs to unlock.)
I've been writing a game in Go. The GC doesn't prevent me from getting a smooth 360 frames per second. 1ms is a substantial pause, but isn't even a missed frame.
> But for tiny programs that run on your workstation, using 1.3% CPU doesn't cost more than 1% CPU.
Especially considering that text editor state changes are dependent on user interaction—it’s not like a video game, where the state changes continuously.
The focus on framerates isn’t to account for changes as the user is typing, but rather scrolling. When scrolling through text, is it smooth? Does it tear? For me, that’s what I see when a text editor refers to framerates.
Even still, that constitutes a user event, and not even one that changes the model. Those events presumably come in on the order of tens of milliseconds, so I wouldn't expect this to be a garbage collection problem?
Just as another data-point, java now has ZGC and Shenandoah low-latency GCs that basically sacrifice some throughput for better latency (read barriers instead of write), but the former can already achieve <1 ms global pauses, which is shorter than what the OS scheduler itself can cause. Of course one should never “spam” the GC needlessly, but it does work for up to quite big heap sizes.
ZGC is a very compelling option for applications that get periodic restart opportunities (or are of the cruise missile guidance computer variety).
In places where you have to use reference types, sometimes you can allocate everything you should ever need up-front and then pull from that pool of pinned instances throughout the day. Pre-allocating all of your required memory can provide more deterministic outcomes.
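A hedged sketch of that pattern in Go (nothing Java- or ZGC-specific here, just the shape of the idea): pre-allocate a fixed set of buffers at startup and recycle them through a simple free list, so steady-state operation never hands the GC new garbage.

    package main

    import "fmt"

    type buffer struct {
        data [64 * 1024]byte
        n    int
    }

    // pool is a fixed-size free list; a buffered channel keeps it trivially
    // thread-safe. All allocation happens once, in newPool.
    type pool struct{ free chan *buffer }

    func newPool(size int) *pool {
        p := &pool{free: make(chan *buffer, size)}
        for i := 0; i < size; i++ {
            p.free <- new(buffer)
        }
        return p
    }

    func (p *pool) get() *buffer  { return <-p.free } // blocks if exhausted
    func (p *pool) put(b *buffer) { b.n = 0; p.free <- b }

    func main() {
        p := newPool(128)
        b := p.get()
        b.n = copy(b.data[:], "reuse me")
        fmt.Println(string(b.data[:b.n]))
        p.put(b)
    }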
Soft real time GCs do well enough to replace C++ with Java in battleship and ground control missile targeting systems, as PTC, Aonix and Aicas have been doing for years.
A frame drop causes little more hazard than just an unhappy gamer.
Speaking of which, many keep forgetting that Unreal C++ and Blueprint make use of GC.
I would contend that 10kHz is pretty fast for a control loop that's doing anything complicated in a garbage-collected language on a non-real-time OS. The latter is somewhat of a bigger issue than the GC. You CAN do it... some of the basic Linux real-time stuff helps the situation a lot.
In .NET you can also change the GC policy at the thread level which can help a lot.
Pool allocation (long-held chunks of memory that can be manually repurposed) can help a lot, since you can write custom pools where the GC never knows it needs to clean up. It is a useful tool when you have big chunks of memory that need to be manipulated at high rates.
I would say from experience that 500Hz-1kHz is a more reasonable loop rate.
I can write basically instant local Electron code. I've never seen anything close to problematic GC. Maybe other people have issues with their app responding in 5ms instead of 1ms?
Often the difference is between small personal projects, where performance can easily be kept under control, and large production products that have to handle all the edge cases, accessibility, many platforms, etc., which can be quite a bit more complex, require more people working on them, and make performance harder to keep under control. That last 10% can be 90% of the complexity and work.
Text editors, especially with language aware features, are far from simple.
It’s about as funny as one of the most prominent alternative TypeScript compilers being written in Go, and being performance competitive with the other one written in Rust. Equally not wrong, just… not the way I thought things would, erm, go.
Kind of surprising, yes. But it also proves the point that whatever your beloved language is, you should still be using the best tool for the job. And the developer felt it was C++.
> What does CodePerfect 95 mean?
It's a retro name. It hearkens back to a time when software was fast and started up instantly, despite running on hardware orders of magnitude slower than we have now.
WordPerfect was a huge monster, probably one of the largest (15+ diskettes) and most sluggish DOS apps I had back in the day.
Windows 95 was cool, but also a significant step back on snappiness compared to 3.11 or even OS/2 Warp.
There are probably much better retro-snappy names, like, don't know, CodeKick 1-2-3?
> WordPerfect was a huge monster, ... most sluggish DOS apps
Hmmm, I don't remember it being sluggish. I worked at the university computer lab at the time and remember WP being one of the snappiest programs. It was mostly written in assembly until the later versions.
There are these things called Grice's Maxims of Conversation, and this one is called the "Maxim of Quantity." The extra unneeded information is strange because it would normally be assumed. Tom Scott's example[0] is "vegan tomato." Felt like sharing because it's kinda neat to see it described by someone in the wild.
An editor not based on Electron sounds so good to me. However, at this point I feel there is no satisfying the mob of "Why would I use this instead of VSCode?"
I am sure that once you hear of the "Gang of Four" design patterns [1], your immediate response will be to rage about why someone is bringing an organized group of criminals into a discussion of software design patterns.
VSCode (or more specifically, the VSCode Go extension) can't handle a mono-repo of Go microservices. My CPU/fans go crazy opening the darn thing. With the extension disabled, VSCode is fine, but it lacks all of the "necessary" features offered by the extension.
I wonder if these performance issues apply to all language extensions that rely on a language server (implemented in VSCode or otherwise). From what I understand, since JSON [0] is used over the wire between the editor and the language server process, there's a lot of serialisation/deserialisation overhead.
Microsoft used to maintain a log inspector [1] which you could use to see the chatter between the server and client, and there was a _lot_ of chatter with a _lot_ of JSON.
LSP stuff is done asynchronously in VS code (and other editors). This doesn't slow down the main editing UI at all. It just means when you start typing and expect to see linter warnings, autocomplete suggestions, etc. it can take a bit longer if the LSP server is slow to respond.
If someone has all their CPU fans, etc. spinning up when opening a big monorepo it's probably just aggressive indexing inside the LSP server they're using. Almost certainly it's an optimization trade-off made by the LSP server author, trying to balance the common case of people typically opening smaller repos and assuming they immediately want search, etc. to be fast and ready.
There's nothing inherent to the LSP protocol or design that causes this problem--someone could build a better LSP server designed to handle large monorepos by deferring indexing, etc. until absolutely needed (but potentially with some tradeoffs, like searching inside a subunit being delayed until it's used). It's the same basic problem git faced with enormous repos slowing down over time, and all the hacks/workarounds people have bolted on to try to limit how much of the monorepo state needs to be available at any moment.
> There's nothing inherent to the LSP protocol or design that causes this problem
Even though most of the indexing and intellisense features are done within the language server itself, there'd still be significant overhead in JSON parsing and serialisation right?
The response example for the `textDocument/definition` request [0] shows a very large snippet of JSON considering the information it conveys:
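Something along these lines, per the spec (a reconstructed example; the path and positions are made up) -- all of that JSON to convey a single file/range pair:

    {
      "jsonrpc": "2.0",
      "id": 24,
      "result": [
        {
          "uri": "file:///home/user/project/server/handlers.go",
          "range": {
            "start": { "line": 117, "character": 5 },
            "end": { "line": 117, "character": 18 }
          }
        }
      ]
    }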
One of the benefits of the JSON standard being pretty simple is that it makes JSON pretty efficient to parse. We're not talking Protobuf efficient here, but I've easily parsed files containing gigabytes of JSON.
That's true, and JSON is probably the best choice as a lowest common denominator given the protocol's intent to be language agnostic. You'd want to implement a language server for language X in X itself, and most languages have mature libraries for working with JSON.
The demo doesn't even have auto-complete for function/variable names? It's fast at doing what, typing everything by hand?
Looking at 27:41 [1] I can hear maybe 12 keystrokes to add some closing parentheses. It adds an extra newline that the programmer has to delete by hand.
It does run through every node in the would-be scene graph by default, but IIRC the Dear ImGui library caches the actual expensive toolkit, graphics, and/or window system calls for each subtree. The API is immediate-mode, but its implementation does not have to be stateless. (Cf. React.)
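To make that concrete, here's a toy Go illustration of the idea (emphatically not Dear ImGui's actual code): the call site re-declares the widget every frame, but the library keeps retained state keyed by widget ID and only redoes the expensive work when the inputs change.

    package main

    import "fmt"

    type shaped struct {
        label  string
        glyphs []rune // stand-in for expensive layout/shaping output
    }

    var cache = map[string]shaped{} // retained across frames, keyed by widget ID

    // Label is an immediate-mode call: cheap every frame, expensive only when
    // the text actually changes.
    func Label(id, text string) {
        s, ok := cache[id]
        if !ok || s.label != text {
            s = shaped{label: text, glyphs: []rune(text)}
            cache[id] = s
            fmt.Println("reshaped", id) // happens once, not every frame
        }
        _ = s.glyphs // submit to the draw list here (cheap)
    }

    func main() {
        for frame := 0; frame < 3; frame++ {
            Label("title", "CodePerfect 95")
        }
    }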
I love the design philosophy and I'm the target market.
I'll never add nonfree tools to my edit/build/run toolchain, however. Every single piece must be fully and freely user-modifiable. These are my work tools.
They talk about vim support being first-class in their app. Well, vim is free software, so if you're not, you're not really vim-compatible.
With all respect - I don't really see why you would need a separate IDE when you can have nvim with native LSP + gopls. Or VSCode/GoLand if you'd rather have an out-of-the-box solution.
GoLand can be slow and laggy at times, but this is the price for all those things baked into it (if you need them).
The page says the editor is "designed to run at 144hz". That's actually a pretty cool feature, as vscode seems locked to 60hz. I have to wonder, what is stopping it from going further? I have a 240hz monitor, are there any IDEs or text editors that can render that fast?
Nothing is stopping it from going faster, unless the framerate is forcibly capped by the developer. It renders the same way a framerate-independent 3D game does.
I desperately want this. I find text editors with basic linting too limited, but full IDEs like IDEA or even VSCode too heavy for some devices. Something in between :(. I've given up laptop development and am forced to work on my desktop until I can afford a better laptop, because IDEA/VSCode runs so slowly.
I bought Sublime Text last year after using it off and on since 2009 or so and I don't think I'll switch again for a long time. I figured I'd gotten $80 of use out of it over the years, anyway. The difference in performance between ST and Atom/VSCode is unreal (but expected).
It doesn't have nearly as big a community behind it, so adjusting your tooling can take time, but I'm doing mostly Rust these days and it does a fine job for me. That all might be a dealbreaker for some, though. I do know Elm support was way better in Atom, but I'm not writing that very much these days.
The goal of 144fps is laudable, but unfortunately pointless here. Keyboard latency is rarely under 20ms so 60fps is enough for smooth animations on everything. Anything beyond that will simply be wasting cycles, especially using immediate mode rendering.
> You will lose a lot of potential customers, which can kill your product especially if you're a very small entity.
Subscriptions are working for Adobe. As much as I dislike subscriptions the writing is on the wall. I wouldn't create a new non-subscription product now.
A startup with few customers is not Adobe. Adobe has a virtual monopoly in its space with few alternatives, the above is yet another IDE in a space dominated by free products.
The pitch is very appealing to me, and I really want to join the beta.
However, I'm on a mac.
I understand that it's not done yet, but an option to subscribe to a mailing list or a newsletter so that I'm informed when the Mac version comes out would be nice.
So, uh, it's competing with vim, which is free, but they want me to pay a monthly subscription fee?
It's actually copying vim's keyboard mappings, so you know it'll be strictly worse than vim, which, again, is free.
Also I can't take its "zero lag" claim seriously if it says vim is fast. Try opening a 10 MB XML file in vim with syntax highlighting on and see if you can spot the lag.
I'm not convinced why I should even spend time investigating this.
> I know how an IDE works. All an IDE really needs to do is (a) let you edit a text buffer, (b) parse your source files, and (c) build an understanding of your code and provide you with intellisense. What's hard about that? Why is my IDE slow?
...
> The IDE is very much unfinished. ... it freezes when opening super huge directories.
So here's what's likely to happen:
1. To fix that freeze, you realize that you need to do file loading asynchronously in the background.
2. That in turn means you need to have your parsing/analysis run asynchronously so that it can analyze files as they come in.
3. But the UI that is reading analysis results is still running on the main thread. So now you need to add locking and other concurrency stuff everywhere. This takes a year of your life. At the end, your IDE is now twice as slow.
4. Also, the user still wants to be able to edit code while all this asynchrony is going on. (Otherwise, you'd just freeze like you were before.) So now your analysis engine needs to handle both concurrent reads and writes while also doing file IO.
5. At this point (especially with all that locking and other concurrency stuff), it's so complex that "re-analyze the entire program from scratch every time" is too slow. You need to be able to maintain a persistent analysis state and incrementally update as the user changes code or files get reloaded. You build a complex dependency graph system so that you can determine which analysis state must be invalidated when a line of code in one file is changed. This is another year of your life.
6. Now your analysis engine is so complex that the limiting factor for making your IDE better is developer productivity. It is incredibly hard to touch this giant ball of mutable concurrent incrementally updated state without breaking something and/or losing your sanity.
7. You eventually realize you need to architect it at a higher level. Instead of low-level threading and locking and carefully hand-authored incremental updating code (which few humans can maintain), you replace it all with a bunch of more coarse-grained persistent data structures (see the sketch after this list). This takes a couple more years of your life. At the end, you get an analysis engine that you can maintain, which is good, because in the meantime, three new major versions of Go have come out and users are asking you to support 17 other programming languages. But your new coarse-grained analysis engine is five times slower than the old ball of spaghetti...
8. A user files a bug saying that the IDE crashes when they try to open their 20-million line Go program. It turns out your implementation assumed you could always fit the full AST for every source file in memory. OK, time to start working on a compressed code representation....
9. Meanwhile, another user files an innocuous little bug asking why the editor doesn't support full-width characters, right-to-left languages, or emoji. You open up your beautiful, 200-line hand-written fixed-width text renderer in one tab. Then you open the Unicode spec in the other and start reading about "extended grapheme clusters", "combining characters", etc. In a third, you start reading about OpenType "multi-colored glyphs"...
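For what it's worth, the coarse-grained approach in step 7 can be sketched in a few lines of Go (a toy, obviously, not a real analysis engine): the analyzer builds an immutable snapshot off the UI thread and publishes it atomically, so the UI only ever reads a consistent snapshot and no fine-grained locking is needed.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // Snapshot is immutable once published.
    type Snapshot struct {
        Diagnostics map[string][]string // file -> messages; toy stand-in for real analysis state
    }

    type Analyzer struct {
        current atomic.Pointer[Snapshot]
    }

    func (a *Analyzer) Current() *Snapshot { return a.current.Load() }

    // Reanalyze builds a brand-new snapshot (on a background goroutine in a
    // real IDE) and swaps it in; readers never block and never see a
    // half-built state.
    func (a *Analyzer) Reanalyze(files map[string]string) {
        next := &Snapshot{Diagnostics: map[string][]string{}}
        for name, src := range files {
            if len(src) == 0 { // stand-in for parsing/type checking
                next.Diagnostics[name] = []string{"file is empty"}
            }
        }
        a.current.Store(next)
    }

    func main() {
        var a Analyzer
        a.current.Store(&Snapshot{Diagnostics: map[string][]string{}})
        a.Reanalyze(map[string]string{"main.go": ""})
        fmt.Println(a.Current().Diagnostics) // the "UI thread" reads the latest snapshot
    }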
I agree that IDEs could be faster than they are. There is a lot of cruft. But it's very hard to fix that by starting with OpenGL Notepad and hacking your way to IntelliJ one Git commit at a time without ever monotonically regressing perf. That's like trying to solve climate change by taking a tricycle and incrementally welding your way to a carbon-neutral container ship.
Writing a real-time code analysis engine for large-scale programs is hard. It's a big complex piece of code. Comparing it to a text editor is like comparing Tetris to World of Warcraft because they're both "games".
That being said, I completely applaud the author for making a go at it. It's hard, but not impossible, and history is written by people who had the courage to do hard things.
10. You get a request for accessibility support, because a whole team uses your lean and mean IDE, and now they want to hire a blind person. Now you get to learn about the hard-to-implement UI Automation API for Windows (disclosure: I was on the Windows accessibility team at Microsoft), or its counterparts on other platforms.
I'm planning to work on an abstraction over those platform accessibility APIs. Think of it as the GLFW or SDL of accessibility. Hopefully the kind of developer that tries to fight bloat by implementing their UI from the ground up will be willing to bend a little on this point.
Wow, that would be really great! Although there’s a lot of criticism about IMGUI not supporting accessibility (and as a result can’t replace platform-specific APIs or something like Qt), I’ve always thought this wasn’t the limitation of the paradigm itself, but it’s just that nobody’s actually tried to implement one. Such a library that meshes well with immediate mode would definitely be a godsend.
[1]: https://github.com/ocornut/imgui