
I am also a bit surprised they chose a benchmark that includes network latency as well. Also, the 1.5s Node.js cold start seems quite high and is not what I would expect at all, especially when looking at https://maxday.github.io/lambda-perf/

The SDKs are "bundled" in all Lambda runtimes, but normally not into the binary; what additional performance would that bring?



> The SDKs are "bundled" in all Lambda runtimes, but normally not into the binary; what additional performance would that bring?

I haven't looked at LLRT's internals, but if I were them, and I were bundling some JavaScript code into a binary and seeking to really optimize startup time, I would probably pre-parse the JavaScript text to produce QuickJS bytecode (or whatever data structures QuickJS actually interprets at runtime; no modern interpreter is actually processing raw text as it goes). In the best case, embedding something like that into the binary could mean that startup processing of the embedded code is O(1) (just like how startup time for a native-code binary is independent of its size, as long as it doesn't have global constructors).


From all that I have read, file size is a major factor in cold starts, as your zip needs to be fetched internally first. So there is some app size at which growing the files by pre-parsing might make things slower, and only beyond a certain size would it pay off. Edit: Sorry, you were talking about the runtime, not the app bundle; I just realized this. If it's in the runtime it is probably free, as the runtime will presumably already be available on all Lambda hosts, I guess. Lambda already has an even more aggressive optimization than pre-parsing, which they call SnapStart: it takes a RAM snapshot after init and restores that on later cold starts. But I think it's only available in some regions and for some runtimes with very slow cold starts, like Java.
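For reference, SnapStart is opted into per function. A rough sketch of what that looks like in an AWS SAM template (the function name and handler here are made up; SnapStart snapshots are taken from published versions, hence the alias):

```yaml
Resources:
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: java17
      Handler: com.example.Handler::handleRequest
      AutoPublishAlias: live
      SnapStart:
        ApplyOn: PublishedVersions
```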



