Hacker News | spankalee's comments

This is the first time that I've heard of LLRT, so here's a link for anyone else interested: https://github.com/awslabs/llrt

I still like Porffor's approach because compiling to WASM means we could have latency-optimized WASM runtimes (though I'm unsure what that would entail) that would benefit other languages as well.


I'm very excited by Porffor too, but a lot of what you've said here isn't correct.

> - Porffor can use typescript types to significantly improve the compilation. It's in many ways more exciting as a TS compiler.

Porffor could use types, but TypeScript's type system is very unsound, and doing so naively could lead to serious bugs and security vulnerabilities. I haven't kept track of what Oliver's doing here lately, but I think the best still-safe thing you could do is compile an optimistic, optimized version of each function (and maybe each basic block) based on the declared argument types, while keeping a type guard that falls back to the general version when the runtime types aren't as declared.

This isn't far from what a multi-tier JIT does, and the JIT has a lot more flexibility to generate functions for the actual observed types, not just the declared types. This can be a big help when the declared types are interfaces, but in an execution you only see specific concrete types.
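A rough sketch of what that guarded specialization could look like (hypothetical names and shapes; this is not Porffor's actual output):

```typescript
// Generic version: preserves full JS semantics (string concat, coercion, etc.).
function addGeneric(a: unknown, b: unknown): unknown {
  return (a as any) + (b as any);
}

// Specialized version a compiler might emit for declared `(a: number, b: number)`.
function addNumbers(a: number, b: number): number {
  return a + b;
}

// Guarded entry point: trust the declared types, but verify at runtime.
function add(a: unknown, b: unknown): unknown {
  if (typeof a === "number" && typeof b === "number") {
    return addNumbers(a, b); // optimistic fast path
  }
  return addGeneric(a, b); // declared types were wrong: fall back safely
}

console.log(add(1, 2));     // 3 (fast path)
console.log(add("1", "2")); // "12" (fallback preserves JS semantics)
```

The guard is what keeps a wrong annotation from becoming a memory-safety bug instead of just a slow path.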

> or have a very simple arena allocator that works at the request level.

This isn't viable. JS semantics mean that the request-handling path can generate objects that are still referenced from outside the request's arena. You couldn't free them without creating use-after-free problems.
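To illustrate (a contrived sketch, not from any real codebase): any long-lived structure, like a module-level cache, lets request-scoped objects escape their request:

```typescript
// A module-level cache that outlives any single request.
const cache = new Map<string, { url: string; body: string }>();

function handleRequest(url: string): { url: string; body: string } {
  // Imagine this object were allocated in a per-request arena...
  const result = { url, body: "hello" };
  // ...but it escapes into state that outlives the request.
  cache.set(url, result);
  return result;
}

handleRequest("/a");
// The request is over. Freeing its arena here would turn this lookup into
// a use-after-free; real JS semantics require the GC to keep the object alive.
console.log(cache.get("/a")); // still reachable
```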

> - many of the restrictions that people associate with JS are due to VMs being designed to run untrusted code

This is true to some extent, but most of the restrictions are baked into the language design. JS is a single-threaded non-shared memory language by design. The lack of threads has nothing to do with security. Other sandboxed languages, famously Java, have threads. Apple experimented with multithreaded JS and it hasn't moved forward not because of security but because it breaks JS semantics. Fork is possible in JS already, because it's a VM concept, not a language concept. Low-level memory access would completely break the memory model of JS and open up even trusted code to serious bugs and security vulnerabilities.

> It is unlikely that many people would run something compiled with Porffor in a WASM runtime

Running JS in WASM is actually the thing I'm most excited about from Porffor. There are more and more WASM runtimes, and JS is handicapped there compared to Rust. Being able to intermix JS, Rust, and Go in a single portable, secure runtime is a killer feature.


> I haven't kept track of what Oliver's doing here lately

Please do go and check on the state of using types to inform the compiler (I'm not incorrect).

On the arena allocator, I wasn't clear enough; as stated elsewhere, this was in relation to having something similar to isolates - each having a memory space that's cleaned up on exit.

Python has almost identical semantics to JS, and has threads - there is nothing in the ECMAScript standard that would prevent them.


It is absolutely true that it is unsafe to trust TypeScript types. I've chatted briefly with Oliver on socials before and he knows this. So I am a bit confused by this issue: https://github.com/CanadaHonk/porffor/issues/234 which says "presume the types are good and have been validated by the user before compiling". This is just not a thing that's possible. Types are often wrong in subtle ways. Casts throw everything out the window.

Dart had very similar issues and constraints and they couldn't do a proper AOT compiler that considered types until they made the type system sound. TypeScript can never do that and maintain compatibility with JS.

Isolates are already available as workers. The key thing is that you can't have shared memory; otherwise you can get cross-isolate references and all the synchronization problems of threads.

And ECMAScript is simply just specified as a single-threaded language. You break it with shared-memory threads.

In JS, this always logs '4', because nothing else can run between the assignment and the log. With shared-memory threads, that guarantee disappears:

    let x = 4;
    // with shared-memory threads, another thread could write to x here
    console.log(x); // always logs 4 in today's JS

> It is absolutely true that it is unsafe to trust TypeScript types... This is just not a thing that's possible.

Well... unsafe and impossible aren't quite the same thing. I guess this is possible if you throw out "safe" as a requirement?


No, it is the right call. TypeScript has breaking changes. It would need to be standardized first.

What? All this does is strip types

Compile the packages to JavaScript before publishing. We absolutely should not be publishing TypeScript to npm.

I can understand the argument, since npm has no solution for TypeScript packages, unlike JSR:

"You publish TypeScript source, and JSR handles generating API docs, .d.ts files, and transpiling your code for cross-runtime compatibility."

Still would have been nice to have this for private packages.

This makes Deno/Bun much more attractive alternatives


JSR does that? Now that might be a good reason to move my packages over to get rid of tsup.

As an additional compat dist? Maybe. Otherwise, just leave TS as-is. It simplifies debugging and would allow things like static Hermes to work.

Why do we need to log in?

we send out an email when the tests are finished (takes about 30 mins)

That makes you sound like you are dodging the question.

I mean that we wanted an email address to send the results to when they finish.

Based on comments here, I do think we should allow users to run the audit first (and provide an email address if they want us to follow up with results later).


What do any of those things have to do with empathy?

All three replaced impartial rules with empathy-driven bias.

JavaScript is one of the three core file types of the web. You can rely on it as much as HTML and CSS. I don't get the unique derision of JS compared to the other file types.

That's an understandable take in nearly all commercial and institutional contexts. But in others just involving human people, no. Many times JS fails or isn't available. So building progressively enhanced web documents preserves utility across the spectrum of human visitors (and maintains accessibility). But if you only have a profit motive, then yes, there's no need for robust solutions. The number of people who can't run JS well won't eat into profits or cause enough complaints to get you in trouble.

I think progressive enhancement is a cool approach to building stuff.

I also think “turn JS on” is a fairly reasonable ask these days. A lot of the web tends to break when CSS is disabled or fails, too.


If your HTML or CSS fails to load, you're going to have a hard time too. Web pages have many critical resources.

Failing to load is not a problem. Failing to execute is.

Usually because the web dev has used some new JavaScript feature that only the latest JS engines support. HTML and CSS, if they're there, they're there. Sure, there's caniuse for HTML and CSS, but they only have to load; the text/images/etc. will be there. JS has to be both loaded and executed. If the latter doesn't happen just right, then the text and other media won't be there. It's a very big difference.


Generally I agree.

I think there's a group of people who are salty that JS became “the lang” for the web. Another group loathes the framework insanity of webdev. I count myself among the latter, not the former. I equally hate all languages.

JS is heavily overused, but the “web” of today is not the web of the '90s or 2010s, which some people cannot get over.


No, you can't really rely on it. Welcome to the Performance Inequality Gap: https://infrequently.org/2024/01/performance-inequality-gap-...

One additional thing that article fails to mention: you should not test your device in a context where it can cool itself easily. Test on your devices when they are wrapped in a blanket, and while there's another program using 100% CPU.

Your conclusion is not the same as the article you link. JS is fine, but it should be used relative to the targeted use case.

That astonishingly long and well-researched read loses impact when its thesis rests on a primarily moral argument. Being fast is better for both the privileged and underprivileged.

Moral handwringing rarely moves people to action.


> Your conclusion is not the same as the article you link.

My conclusion directly derives from the article. If your app relies on Javascript, it will be non-functional/broken/unusable for a huge number of people while their devices struggle to download, unzip, parse and run your JS bundles.

BTW, it's worse with web components built with default assumptions (i.e. without bundling), since `import` statements cause a long waterfall as each component loads its dependencies.
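As a back-of-the-envelope sketch of why the waterfall hurts (the 100ms round-trip figure and the module names are assumptions for illustration):

```typescript
// Unbundled, each module's imports are only discovered after that module
// has been fetched and parsed, so a dependency chain loads sequentially.
const rttMs = 100;     // assumed network round trip per module request
const chainDepth = 3;  // e.g. app.js -> my-button.js -> base-button.js

// Waterfall: one full round trip per level of the chain.
const waterfallMs = rttMs * chainDepth;

// Bundled (or fully preloaded): the whole chain arrives in parallel.
const parallelMs = rttMs;

console.log(waterfallMs); // 300
console.log(parallelMs);  // 100
```

Bundling, or hinting the whole chain up front (e.g. with module preloading), collapses the sequential cost back toward a single round trip.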


    import {sum} from './sum.js' with {type: 'comptime'};
is an unfortunate abuse of the `type` import attribute. `type` is the one spec-defined attribute and it's supposed to correspond to the mime-type of the imported module, thus the two web platform supported types are "json" and "css". The mime-type of the imported file in this case is still `application/javascript`, so if this module had a type it would be "js".

It would have been better to choose a different import attribute altogether.
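For reference, the two spec-aligned uses look like this (file names are made up), with `type` matching the mime-type of the imported resource:

```typescript
import config from './config.json' with { type: 'json' }; // application/json
import sheet from './theme.css' with { type: 'css' };     // text/css
```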


You’re projecting the mimetype idea from two examples but the proposal is intentionally agnostic about what type might be used for:

> This proposal does not specify behavior for any particular attribute key or value. The JSON modules proposal will specify that type: "json" must be interpreted as a JSON module, and will specify common semantics for doing so. It is expected the type attribute will be leveraged to support additional module types in future TC39 proposals as well as by hosts.


I think it was two things:

1) The original web components proponents[1] were very heavily into "vanilla" web components. Web components are low-level APIs and just don't have the ergonomics of frameworks without a library to help. For a few elements built by a few true believers, this is ok, but when you scale it out to every component in a complex app with company-wide teams you need the ergonomics and consistency-setting of a declarative and reactive API.

2) The GitHub Next team built their work in React because they required the ergonomics to move fast and they were operating independently from the production app teams. I think the first thing of theirs to be integrated was Projects, and they could probably show that their components had much better DX. Starting from the ergonomics argument, the hiring argument would take hold.

I've seen this happen a few times where web components teams believe that the whole point of them is to not have to use dependencies, hamstring themselves by not using a helper library on that principle, and then get web components as a whole replaced for a proprietary framework in search of ergonomics. If they had used an ergonomic web component helper library, they could have stuck with them.
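For example, here's a sketch using Lit, one such helper library (`my-counter` is a made-up element):

```typescript
import { LitElement, html } from 'lit';
import { customElement, property } from 'lit/decorators.js';

// A declarative, reactive custom element in a few lines: the template
// re-renders automatically when the reactive `count` property changes.
@customElement('my-counter')
export class MyCounter extends LitElement {
  @property({ type: Number }) count = 0;

  render() {
    return html`
      <button @click=${() => this.count++}>Count: ${this.count}</button>
    `;
  }
}
```

Because this is still a standard custom element, it can be dropped into any page or framework as `<my-counter>`, which is part of what makes later migrations comparatively painless.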

The irony is that these transitions are usually way easier than a framework-to-framework migration because of web components' inherent interoperability. I still consider that a plus for web components: morally and practically, technologies should be as easy as possible to migrate away from.

[1] GitHub was so into web components that they regularly sent representatives to W3C meetings to work on them, but a lot of the original team has left the company over the last 10 years.


> Web components are low-level APIs and just don't have the ergonomics of frameworks without a library to help. For a few elements built by a few true believers, this is ok, but when you scale it out to every component in a complex app with company-wide teams you need the ergonomics and consistency-setting of a declarative and reactive API.

GitHub did have their own declarative, semi-reactive web component framework. It's pretty nice!

https://github.github.io/catalyst/

It not at all coincidentally bears some resemblance to the (thinner, simpler) Invoker Commands API that has shipped in HTML (they share a main author):

https://open-ui.org/components/invokers.explainer/


> 2) The GitHub Next team built their work in React because they required the ergonomics to move fast and they were operating independently from the production app teams. I think the first thing of theirs to be integrated was Projects, and they could probably show that their components had much better DX. Starting from the ergonomics argument, the hiring argument would take hold.

There's also an impression from outside that the GitHub Next team also got a large amount of pressure to dogfood LLM usage, especially in new feature building.

There seems to be a visible ouroboros/snowball effect with LLMs: they were trained on so much React code that it's what they know best, and because it's what they know best, the goal becomes moving things to React and React Native to take the most advantage of LLM "speed and productivity boosts".

I'm currently a pawn that feels about to be sacrificed in a cold war happening in my own day job over how much we should move towards React to make executives and shareholders happier at our "AI all the things" strategy. Sigh.


They were, and still very much are, using web components. But they hired a team to do experiments to imagine the future of GitHub UI, and that team built everything in React. Now that team's work is being ported to the production UI.

Also, as part of their bullshit React rewrite, in addition to making everything much, much slower, they managed to break basic functionality like the back and forward buttons in Safari. That only got fixed quite recently, but for a good 9-12 month period it was impossible to use on iOS.

Genuinely whoever was a part of pushing this rewrite should have been fired.


I know you’re just expressing your frustration, but the “the person who did this should be fired” meme you’re propagating is pretty toxic. Decisions like this are never the work of one person, and even when they are, any problems you’re perceiving were traded off against other problems you’re not perceiving. And even if it was an unadulterated mistake, it’s just that… a mistake. Should you get fired for the mistakes you make?

I guess what I’m really saying is that the internet would be a better place if you and others like you dialed down the rhetoric, dialed down the outrage, and dialed up the empathy.

Thanks for listening.


Just use that thing that renders React 1000x faster, million JS or something

it has some limitations: https://old.million.dev/docs/manual-mode/block#breaking-rule... and it isn't a silver bullet on its own
