mosh is neat, but I've mostly switched back to good ol' SSH over Tailscale due to various rendering bugs caused by client-server mismatches, as well as the lack of port forwarding.
Basically, mosh attempts to synchronize the state of the terminal, which is made up of character cells. It sounds simple until you realize that Unicode and fancy escape sequences exist, and the behavior of the client and the server must match, otherwise you get weird misalignments that are difficult to debug:
You really need those patches to have a good experience, and popular mosh clients like Blink on iOS incorporate them in their builds. However, things look wonky if you don't use the corresponding server builds, and you don't want to dig through layers of abstraction to find out why selecting lines in a specific file in neovim turns everything into a jumbled mess every so often.
There is no end in sight for those patches to be merged upstream, no end in sight for distros to ship new versions, and no end in sight for protocol changes to make state synchronization more resilient. So, back to SSH we go...
Edit: Fixed wrong link for underline/undercurl patch
To be frank, the whole post reads like "I hate change" with no convincing argument otherwise. The author even acknowledges the very lenient ramp-up from CAB _and_ the myriad of available tooling, yet still throws his hands up.
> I am responsible for approving SSL certificates for my company. [...] I review and approve each cert. What started out as a quarterly or semi-monthly task has become a monthly-to-weekly task depending on when our certs are expiring.
I don't get the security need for manually approving renewals, and the author makes no attempt to justify it either. It may make sense to have some manual process for initial issuance, since certificates are permanently added to a publicly available ledger (the Certificate Transparency logs). And to take a step back: do you need public certs to begin with? Could you use an internal CA instead? Again, the author makes no attempt to justify this, or to demonstrate understanding of it, in the post.
> email-based validation may as well not exist when we need to update a certificate for test.lab.corp.example.com because there is no webmaster@test.lab.corp.example.com.
I know that this is an example, but as a developer it would be a pain to have to go through a manual, multi-day process for my `test.lab.corp.example.com` to work. And the rest of the post seems to imply that this is actually the case at OP's org.
> Which resource-starved team will manage the client and the infrastructure it needs? It will need time to undergo code review and/or supplier review if it’s sold by a company. There will be a requirement for secrets management. There will be a need for monitoring and alerting. It’s not as painless as the certificate approval workflow I have now.
There are additional costs, and new processes will have to be created, yes, but even from a non-technical POV this looks like a good opportunity to lead and take ownership.
> Any platforms that offer or include certificate management bundled with the actual services we pay for will win our business by default. [...] What is obvious to me is that my stakeholders and I are hurrying to offload certificate management to our vendors and platforms and not to our CA.
That's okay. If you hate change and don't want to take ownership, pay someone else to take ownership.
From git's perspective, jj bookmarks are just regular git branches, so you can do `jj git push` and open a PR as usual.
However, unlike git branches, jj bookmarks are pinned to change IDs instead of immutable commit SHA-1s. This means stacked PRs just work: change something in the pr-1 bookmark, and all dependent bookmarks (pr-2, pr-3, ...) are updated automatically. A `jj git push --tracked` later and everything is pushed.
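Roughly, the workflow looks like this (a sketch; the bookmark names are illustrative):

```
# three stacked changes, one bookmark each
# (edit files in each change as you go; @ is snapshotted automatically)
jj new main -m "change 1" && jj bookmark create pr-1
jj new -m "change 2"      && jj bookmark create pr-2
jj new -m "change 3"      && jj bookmark create pr-3
jj git push --all           # first push; open the PRs as usual

# fix something at the bottom of the stack
jj new pr-1                 # new working copy on top of pr-1
# ...edit files...
jj squash                   # fold the fix into pr-1; pr-2 and pr-3
                            # are rebased automatically
jj git push --tracked       # every bookmark is updated in one go
```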
And do downstream PRs show just what changed, or is the merge target main, which then just keeps accumulating differences?
This is one of the strengths I appreciate about Graphite: the PRs always target the preceding branch, but it knows that when you go to merge, it should actually retarget and merge against main.
Yeah – the key thing here is that there is work to be done on the server, so JJ likely either needs its own forge or a GitHub App that handles managing PRs for each JJ commit.
I'm a huge fan of the JJ paradigm – this is something I'd love for us to be able to do in the future once one or both of:
- we have more bandwidth to go down this road
- JJ is popular enough that it's worthwhile for us to do
That said I'd also love to see if anyone in the community comes up with an elegant GH app for this!!
As a satisfied customer of yours, the prospect of having to give up Graphite is the main thing keeping me from giving jj a try at my day job.
Ironic, since if there are a bunch of people in my boat, the lack of us in jj's user base will make it that much harder for jj to cross the "popular enough to be worth supporting" threshold.
My ideal is really just a version of `gt sync` and `gt submit` that handles updating the Graphite + GitHub server-side of things and lets you use `jj` for everything else; I think it could feel super nice. Probably not as simple as my dreams, but hopefully something we can get to with enough interest!
GitHub and GitLab both allow you to specify a merge target other than main and only show you the differences from that target. If the target is merged into main, the PRs are retargeted to main.
There is definitely room for an improved forge experience that takes advantage of the additional powers of jj, but it's no worse an experience using them today than it is with git.
By any chance did you manage to get branch protection rules working neatly in this paradigm? Ideally I’d like any CI to be re-run as necessary and the branch to be merged automatically once review was approved and its base became master, but I never got a completely hands-free setup working. Maybe a skill issue, though.
Basically, if I have five stacked PRs and the newest four get an approval, I want everything to stay in place, no merges. Then, when the base (oldest) PR gets approved, I’d like the PRs to all get merged, separately, one after the other, without further interaction from me.
Does GitHub’s merge queue implementation support that?
One problem remains: jj makes it a breeze to parallelize work, but descendant changes then end up with multiple parents, and a PR can only target a single base branch, so you cannot point it at both parents at once.
I mostly solve this by putting a branch on the merge commit M; the “real” change R is then a child of that. The PR is targeted to merge R into M.
As the parents of M are merged, I rebase the whole stack. When M has a single parent left, I abandon M and retarget the PR to merge R into that parent.
It requires a little babysitting, but the PR shows the diff I want it to.
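In jj terms, the dance looks roughly like this (a sketch; `p1`, `p2`, `M`, and `R` are illustrative names):

```
# two independent in-flight parents, p1 and p2
jj new p1 p2 -m "M: merge base for the PR"
jj bookmark create M
jj new -m "R: the real change"
jj bookmark create R
jj git push --all            # open the PR as R -> M

# after p1 lands upstream, drop it from the merge
jj rebase -s M -d p2         # M now has a single parent
jj abandon M                 # R is rebased onto p2 automatically
# finally, retarget the PR on the forge to R -> p2
```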
Gitpatch attempts to build a Git hosting service with native support for patches and a commit-based review system, where each commit is its own patch. It's also smart enough to handle force pushes and can update or reorder patches as needed.
Only the invoked app knows whether it needs the focus in the first place. Maybe the link you clicked is supposed to initiate some background processing that does not demand your focus at all.
One issue that comes with leaving GitHub is a higher barrier to contributing. The author appears to see this as a nice filter, but it may not make sense for you. With a self-hosted forge, a new contributor will need to:
a) Sign up for an account on your forge: Do contributors really want another account? Does your captcha/email verification actually work (I've encountered ones that don't)? There are also forges that require you to ask for an account, which is another hurdle.
b) Send an email: Configuring `git send-email` (as sketched below) is alien to many contributors and may not even be doable in some corporate environments (OAuth2 with no app passwords allowed). Diverging from it is error-prone and against social norms the contributor may not even be aware of (until they get flamed on the mailing list). You are also giving up automated CI, which is a big part of the contributor feedback loop.
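For reference, the minimal setup looks something like this (a sketch; the SMTP details and list address are placeholders):

```
# ~/.gitconfig
[sendemail]
    smtpServer = smtp.example.com
    smtpServerPort = 587
    smtpEncryption = tls
    smtpUser = you@example.com

# then, to mail the last two commits as patches:
git send-email --to=project@lists.example.org HEAD~2
```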
To be clear, going independent does indeed work for small personal projects (which don't care much about contributions) as well as for established ones (where there's a large incentive for new contributors to jump through hoops), and I'm fully aware that a lot of HNers do not see the need for those "niceties" provided by GitHub. But I feel that people often underestimate the barriers they are putting up.
I believe the slightly higher barrier is a feature, and a good filter for low-quality spam.
On the other hand, if I've spent the time and effort to write a patch for public release, I have no issue jumping through hoops to see it published, whether that means creating an account or learning the correct incantation for git send-email. Usually the thing that stops a contribution is finding the time and will to prepare a PR for review; compared to that effort, creating an account is trivial.
The way I see it, using a distributed VCS like git benefits from having a distributed ecosystem. Putting everything in Microsoft’s hands for them to train their commercial AI product on your code is a little reckless and short-sighted. And we could do with fewer silos.
You go to https://code.visualstudio.com and it will appear that AI integration is the whole point too. How a thing is currently marketed != How people have been using it.
Other commenters have already provided examples for other languages, and it's the same in Rust: async functions are just regular functions that return an `impl Future` type. From a sync function, you can call a bunch of async functions and return the futures to your caller to handle, or you can block the current thread with the `block_on` function typically available through a handle (similar to the Io object here) provided by your favorite async runtime [0].
In other words, you don't need such an Io object upfront: you only need it when you actually want to drive execution and get the result. From this perspective, the Zig approach is actually less flexible than Rust's.
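A minimal sketch of that split, assuming the Tokio runtime (the function names are made up for illustration):

```rust
use std::future::Future;
use tokio::runtime::Runtime;

// An async fn is sugar for a regular fn returning `impl Future`.
async fn fetch_value() -> u32 {
    42
}

// A plain sync function can construct and hand back the future
// without any runtime in sight...
fn make_work() -> impl Future<Output = u32> {
    fetch_value()
}

fn main() {
    // ...and only the code that finally drives it needs the runtime
    // handle (the moral equivalent of Zig's Io object).
    let rt = Runtime::new().expect("failed to build runtime");
    let value = rt.block_on(make_work());
    println!("{value}");
}
```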
The point is that people want to fund the development of the actual browser engine which is more important than the customization scripts that those forks maintain. The engine is what people are worried about.