
For me (maintainer of Zed's vim mode) it comes down to a few things:

1. LSPs differ per language, so I'm never sure whether I'll get lucky today or not. For small changes it's more reliable to talk about them in terms of the text.

2. LSPs are also quite slow. For example, in Zed I can do a quick local rename with `ga` to multi-cursor onto matching words and then `s new_name` to change them. (For larger or cross-file renames I still use the LSP.)

3. As a human, I err continually. For example, in Rust a string is `"a"` and a char is `'a'`, and it's easy for my JavaScript-addled brain to use the wrong quotes. I don't know of any LSP operation that does "convert string literal to char literal" (or vice versa), but in vim it's easy.

We are slowly pulling in support for various vim plugins, but the tail is long and I am not likely to build a vim-compatible Lua (or VimScript :D) API any time soon.

For example, most of vim-surround already works, so you could get the most-used parts of mini.surround working with just a few keybindings, e.g. `"s a": ["vim::PushOperator", { "AddSurrounds": {} }]`, but rebuilding every plugin is a labor of love :D.
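
To give a flavor of what that looks like in practice (untested, and the exact context predicate may differ), the binding above would go in your keymap.json roughly like so:

  [
    {
      "context": "Editor && vim_mode == normal",
      "bindings": {
        "s a": ["vim::PushOperator", { "AddSurrounds": {} }]
      }
    }
  ]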


I’ve been working on a configuration format [1] that looks surprisingly similar to this!

That said, the expectation in CONL is that the entire structure is one document. A separate syntax for multiline comments also enables nice things like syntax highlighting the nested portions.

I do like the purity of just a = b, but it seems harder to provide good error messages with so much flexibility.

1: https://github.com/ConradIrwin/conl


Funnily enough, I've been working on solving the same problem concurrently. Though in my _very_ biased opinion, I think CONL is easier to read and write:

https://github.com/ConradIrwin/conl

  value = example

  map
    a = b

  list
    = 1
    = 2

  multiline_value = """bash
    #!/usr/bin/bash
    echo "hello world"


Agreed that this is much better than the OP. That said, my general opinion is that whitespace-only indentation should be avoided, especially in a serialization format, because of the inherent ambiguity of whitespace characters and the human mistakes that result. When I designed CSON [1] I strove to make it as readable as possible without indentation for that reason.

[1] https://github.com/lifthrasiir/cson


Nice – I like your verbatim syntax for multiline strings!

I went with indentation because a very common use case in a configuration file is commenting out lines. Even with CSON-like comma rules, you still need to balance your {}s and []s. Indentation balances itself most* of the time.


Indentation is still desirable for most human tasks, indeed! But you can have indentation and grouping at once, each complementing the other. As you've noticed, CSON's verbatim syntax was intentionally designed so that it remains valid without any indentation, but your instinct really wants to align those lines anyway. (A similar approach can be seen in Zig verbatim strings, which seem to have been designed independently of CSON and make me much more confident about this choice.)


Indentation in xᴇɴᴏɴ is optional and for readability.


I mean it can be encoded inline, including using \n in strings


I wrote something like this for contact autocompletion. Running in the browser, we had to ensure both that tree construction was fast and that completion was instantaneous. So the next level of optimization is to amortize tree construction over searches (using the observation that most of the tree is never accessed).

https://github.com/superhuman/trie-ing
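
The core trick, roughly (a sketch of the idea only, not trie-ing's actual code): keep inserted entries in a flat bucket per node, and only split a bucket into children when a search actually descends through that node:

  // Sketch only: defer building subtrees until a search touches them,
  // so construction cost is amortized over searches and untouched
  // branches are never built at all.
  type Entry = { key: string; score: number };

  class LazyNode {
    private bucket: Entry[] = [];                 // entries not yet split
    private children: Map<string, LazyNode> | null = null;

    insert(e: Entry, depth = 0): void {
      if (!this.children) { this.bucket.push(e); return; }   // O(1) append
      this.childFor(e.key[depth] ?? "").insert(e, depth + 1);
    }

    search(prefix: string, depth = 0): Entry[] {
      if (depth >= prefix.length) return this.collect();
      if (!this.children) this.split(depth);      // pay the cost lazily
      const child = this.children!.get(prefix[depth]);
      return child ? child.search(prefix, depth + 1) : [];
    }

    private split(depth: number): void {
      this.children = new Map();
      for (const e of this.bucket) this.childFor(e.key[depth] ?? "").bucket.push(e);
      this.bucket = [];
    }

    private childFor(ch: string): LazyNode {
      let child = this.children!.get(ch);
      if (!child) this.children!.set(ch, (child = new LazyNode()));
      return child;
    }

    private collect(): Entry[] {
      const out = [...this.bucket];
      if (this.children) for (const c of this.children.values()) out.push(...c.collect());
      return out.sort((a, b) => b.score - a.score); // e.g. contact frequency
    }
  }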


interesting. i made something much more stupid, but brutally effective: https://github.com/leeoniya/uFuzzy

i guess if you wanted results ordered by contact frequency you could just keep the original haystack sorted by frequency, and you'd get the Richard match first. or keep the frequency in an adjacent lookup and sort/pick the results afterwards.

PSA: don't use the ultra popular Fuse.js for fuzzy matching, it's super slow and has pretty terrible match quality. e.g. https://leeoniya.github.io/uFuzzy/demos/compare.html?libs=uF...


If you're interested in a TypeScript fork of this that also supports deletion, see here: https://github.com/shortwave/trie

There are also a couple of bug fixes in there, for example: https://github.com/shortwave/trie/commit/1e7045d89cc20011251...


A failure rate of roughly once every two years is tiny compared to the rate of failure introduced by other things (from human error on up), and for many time-related things being off by a second is irrelevant (or, again, tiny compared to all the other sources of noise in measuring time). So it seems reasonable to me to ignore leap seconds in the vast majority of projects.

That said, modern cloud environments do hide this problem for you with leap smearing [1], which seems like the ideal fix. It'd be nice to see the world move to smeared time by default so that one day = 86,400 seconds stays consistent (as does 1 second = 1e9 nanoseconds); the trade-off is that the real length of the smallest subdivisions of time perceived by your computer varies intentionally as well as randomly.

[1] https://developers.google.com/time/smear
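
For concreteness, the 24-hour linear smear described there works out to roughly this (a sketch; the linked page is the authoritative description):

  // Google-style linear smear: spread one positive leap second over the
  // 24 hours from noon UTC before the leap to noon UTC after, by making
  // each second 1/86400 longer. Cumulative offset after t seconds:
  const SMEAR_WINDOW = 86_400; // seconds

  function smearOffsetSeconds(secondsSinceNoonBefore: number): number {
    if (secondsSinceNoonBefore <= 0) return 0;
    if (secondsSinceNoonBefore >= SMEAR_WINDOW) return 1;
    return secondsSinceNoonBefore / SMEAR_WINDOW; // grows linearly to 1s
  }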


This is great!

A few more exciting things are happening with file-systems in Chrome that will make this a lot better soon.

Firstly, OPFS gives you a private, sandboxed filesystem that you can access with `await navigator.storage.getDirectory()`, avoiding the permission prompt.

Secondly "Augmented OPFS" is coming to web workers, which will give you the ability to read/write partial files with `file.createSyncAccessHandle()`.

There's a demo of this working from the Chrome team here: https://github.com/rstz/emscripten-pthreadfs/tree/main/pthre...

And a more thorough write-up here: https://docs.google.com/document/d/1SmfDdmLRDo6_FoJMl5w1DVum...
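
Put together, the worker-side code ends up looking roughly like this (a sketch from memory; the exact shape of the API may have shifted since):

  // Sketch: inside a Web Worker, using the origin-private filesystem
  // plus a sync access handle for partial reads/writes.
  async function demo(): Promise<string> {
    const root = await navigator.storage.getDirectory();             // no permission prompt
    const file = await root.getFileHandle("notes.bin", { create: true });

    const access = await file.createSyncAccessHandle();              // workers only
    access.write(new TextEncoder().encode("hello opfs"), { at: 0 }); // write at an offset
    access.flush();

    const buf = new Uint8Array(access.getSize());
    access.read(buf, { at: 0 });                                     // read a slice back
    access.close();
    return new TextDecoder().decode(buf);
  }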


This is super exciting! I did not know about these improvements, thanks for sharing!


Shameless plug, but I built https://github.com/superhuman/lrt as simply as possible, explicitly to avoid this kind of issue (which seems to plague tools in this space).


To do this well (for servers, which are the most common case) you need to keep the port open (and delay requests while recompilation is in progress), or clients can see transient errors. So it's probably not a language-level concern but a protocol-level one (it's very solvable for HTTP, for example).
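
The shape of that protocol-level fix, very roughly (a hand-wavy sketch, not any particular tool's code): a long-lived proxy owns the port, parks incoming requests while a rebuild/restart is in flight, and forwards them once the new process is listening.

  // Sketch: the proxy keeps the port open across restarts of the app
  // behind it, so clients never see connection-refused mid-rebuild.
  import * as http from "node:http";

  const UPSTREAM = { host: "127.0.0.1", port: 3000 }; // hypothetical app port

  let rebuilding = false;
  let waiters: Array<() => void> = [];

  export function setRebuilding(value: boolean): void {
    rebuilding = value;
    if (!value) { waiters.forEach((w) => w()); waiters = []; }
  }

  http.createServer(async (req, res) => {
    if (rebuilding) {
      await new Promise<void>((resolve) => waiters.push(resolve)); // park the request
    }
    const upstream = http.request(
      { ...UPSTREAM, path: req.url, method: req.method, headers: req.headers },
      (upRes) => {
        res.writeHead(upRes.statusCode ?? 502, upRes.headers);
        upRes.pipe(res);
      }
    );
    req.pipe(upstream);
  }).listen(8080);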

I built https://github.com/superhuman/lrt which tries to solve this problem in a go-like way (no configuration required, minimal log noise, and reliability/simplicity as the primary design goals) for Superhuman.


We went through the same problem at Superhuman (and as I write this, our latest extension update has been pending review for two weeks, so maybe we're about to hit it again).

Simeon on the mailing list was quite reassuring, and I would recommend reaching out to him, though there are limits to what he can help with.

That said, we found that the review process is quite arbitrary; resubmitting may work simply because you get a different reviewer. (We've seen identical copies of the extension with different version numbers where one was approved and one rejected.)

We've also observed that they use some kind of automated code-analysis to tell whether or not you're making use of the permission; so you may want to check that it's obvious from the code included in the extension bundle that you need the permissions you're asking for.

We've also hypothesized that they apply different standards to extensions depending on the number of users – our staging extension (~50 users) usually gets approved quickly, but our production extension usually takes a while and is less likely to be approved. (This may just be luck of the draw coupled with arbitrariness though)


Damn, that sounds like crazymaking :(

Dunno why they can't be more explicit about which part of the code is the issue.


Superhuman | Fullstack Engineer | San Francisco & Vancouver | Full-time |

At Superhuman, we're rebuilding the inbox from the ground up to make it extremely fast, delightful, and intelligent — you'll feel like you have superpowers.

We're looking for Fullstack Engineers who are deeply invested in building quality software to focus on building out our flagship desktop product. You'll be working heavily on everything that makes an email client tick: Storing and searching gigabytes of data in the browser; building blazingly fast, visually gorgeous user experiences; and jumping in wherever you can make the biggest impact.

• Stack: React, Golang, Postgres, Electron, Google Cloud

• Core values: Create Delight, Be Intentional, Remarkable Quality

• Growth: 20% MoM Growth, 250k+ users on wait list

• Funding: $51M+ from Andreessen Horowitz, First Round Capital, and the founders of Gmail, GitHub, Stripe, Reddit, Intercom, and AngelList

• Interview process: a phone call with one of our founding engineers to learn more about the team and tech challenges (plus a technical discussion), then an onsite with Emuye Reynolds (Head of Mobile) and me (CTO)

You can apply here https://superhuman.com/roles?gh_jid=260350 or shoot me questions at cirwin@superhuman.com

– Conrad

PS: Check out our blog https://blog.superhuman.com/

