Not OP, but I think it does run Postgres as a separate process. However, IMHO the general use case for SQL is to give external actors (humans, machines) structured access to the underlying data. So I'd see a benefit to a true in-process embedding of Postgres if the host process also exposed the standard Postgres TCP/IP port (5432), so you could hook your software up to a query tool, a reporting interface, etc.
Beyond that, why care whether the "embedding" involves a spawned process? It still works great for integration tests, which I suspect are the main use case, and for specialized data-analysis software where a spawned process is no big deal.
I do appreciate that they lead with the examples; they convey 90% of the important information. TBH, having worked with YAML just enough to get by with k8s deployments, I could immediately see how this would be an improvement.
Yeah, I don't disagree. I'd go further and say the examples are on-point for a "human oriented" language. But the formal spec reveals how simple or complicated this language is. (And I'm also writing this from the perspective of someone who uses a programming language that does not have a HOML implementation).
Since Cap'n Web is a simplification of Cap'n Proto RPC, it would be amazing if eventually the simplification traveled back to all the languages that Cap'n Proto RPC supports (C++, etc.), or at least could be made binary compatible. Regardless, this is great.
Yeah I now want to go back and redesign the Cap'n Proto RPC protocol to be based on this new design, as it accomplishes all the same features with a lot less complexity!
But it may be tough to justify when we already have working Cap'n Proto implementations speaking the existing protocol, which took a lot of work to build. Yes, the new implementations would be less work than the originals, but it's still a lot of work that is essentially running in place.
OTOH, it might make it easier for Cap'n Proto RPC to be implemented in more languages, which might be worth it... idk.
Disclaimer: I took over maintenance of the Cap'n Proto C bindings a couple years ago.
That makes sense. There is some opportunity, though, since Cap'n Proto had always lacked a JavaScript RPC implementation. For example, I had long been planning to take the Cap'n Proto OCaml implementation (which has full RPC support) and run it through one of the two mature OCaml->JavaScript frameworks to get a JavaScript implementation. Long story short: not now, but I'd be interested in seeing if Cap'n Web can be ported to OCaml, and I suspect other language communities may be interested as well. Promise chaining is a killer feature and was (previously) difficult to implement. Aside: promise chaining is quite undersold in your blog post; it is co-equal to capabilities in my estimation.
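To make the point concrete, here is a rough sketch of the shape of an interface with promise chaining; the names are hypothetical, for illustration only (this is not the capnp-rpc or Cap'n Web API):

    (* Hypothetical interface, for illustration only. The key property of
       promise chaining: [call] targets a *promise*, so a dependent call can
       be sent before the previous result has arrived, collapsing a chain of
       N dependent calls into a single network round trip. *)
    module type PIPELINED_RPC = sig
      type promise
      (** a remote result that may not have arrived yet *)

      val call : promise -> method_:string -> args:string list -> promise
      (** send a call on the eventual value of [promise]; returns immediately *)

      val await : promise -> string
      (** block for a final result -- the only place a round trip is paid *)
    end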
I tried using the C library recently but was turned off by the lack of bounds checking. I’m not sure how anyone could reasonably accept packets over the wire which allow arbitrary memory access. Am I misunderstanding? Any hope this can be fixed?
That's just the RPC state machine -- the serialization is specified elsewhere, and the state machine is actually schema-agnostic. (Schemas are applied at the edges, when messages are actually received from the app or delivered to it.)
This is the Cap'n Web protocol, including serialization details:
Now, to be fair, Cap'n Proto has a lot of features that Cap'n Web doesn't have yet. But Cap'n Web's high-level design is actually a lot simpler.
Among other things, I merged the concepts of call-return and promise-resolve. (Admittedly, CapTP was already doing it that way before I even designed Cap'n Proto. It was a complete mistake on my part to turn them into two separate concepts in Cap'n Proto, but it seemed to make sense at the time.)
What I'd like to do is go back and revise the Cap'n Proto protocol to use a similar design under the hood. This would make no visible difference to applications (they'd still use schemas), but the state machine would be much simpler, and easier to port to more languages.
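Roughly, the merged design boils down to something like this (a toy OCaml sketch to show the shape, not the actual Cap'n Web wire format):

    (* Toy sketch, not the real wire format. The point: every call allocates
       a promise id, and one Resolve message settles that id, whether the
       value came from a method return or from a separately exported promise.
       In Cap'n Proto those were two message types with two sets of
       bookkeeping in the state machine. *)
    type value = Null | Num of float | Str of string | Cap of int

    type message =
      | Call of { promise_id : int; target : int; method_name : string; args : value list }
      | Resolve of { promise_id : int; result : (value, string) result }
      | Release of int  (* drop a reference once both sides are done with it *)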
I was trying to port Cap'n Proto to modern C# as a side project while I was unemployed, since the current implementation is years old and new C# features have been released that would make it much nicer to use.
I love the zero-copy serialization and object capabilities, but wow, the RPC protocol is incredibly complex; it took me a while to wrap my head around it, and I often had to refer to the C++ implementation to really get it.
Obviously C is the ultimate compiler of compilers.
But I would call Rust, Haxe, and Hack production compilers. (As mentioned by a sibling comment, Rust has bootstrapped itself since its early days, but that doesn't diminish the fact that OCaml was the choice before bootstrapping.)
Most C and C++ developers take umbrage at lumping the two together. Since C++11, and especially C++17, the languages have diverged significantly. C is still largely compatible (outside of things like uncast malloc) since its rules remain mostly valid in C++, but the two have accumulated fairly substantial incompatibilities with each other. A pure C++ application written today will look nothing like a modern C app.
RAII, iterators, templates, object encapsulation, smart pointers, data ownership, etc. are entrenched in C++, while C is still raw pointers, no generics (no, _Generic doesn't count), procedural code, void* casting, and manual malloc/free.
I code in both, and enjoy each (generally for different use cases), but they are certainly very different experiences.
Sure, and we also still have people coding in K&R-style C. Some people are hard to change in their ways, but that doesn't mean the community/ecosystem hasn't moved on.
> Another one is C++ "libraries" that are plain C with extern "C" blocks.
Sure, and you also see "C Libraries" that are the exact same. I don't usually judge the communities on their exceptions or extremists.
Thanks for some of the early feedback. I made a few tweaks: separated the quick start into two (one for students, one for experienced devs), moved up the examples, and duplicated some of the HN summary text on the linked page. I did the last one because, as an infrequent HN submitter, I hadn't realized that some (maybe most?) people don't read the HN summary.
It is the OCaml language but it also isn't packaged like conventional OCaml. Please don't blame OCaml for that.
It _does_ need a primer for those unfamiliar with OCaml. I was thinking ... since this is scripting ... part of it will probably take the form of a cheat sheet for people coming from Java/C#, JavaScript and Python backgrounds. And another part of it would be how to read an OCaml expression from left to right. Other suggestions welcome.
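For instance, one cheat-sheet entry might show that an OCaml pipeline reads left to right, much like method chaining in Java or JavaScript (a made-up example, not from any existing doc):

    type person = { name : string; age : int }

    (* Read left to right: take [people], keep the adults, then take their
       names. [xs |> f] is just [f xs], so the data flows in reading order. *)
    let adult_names people =
      people
      |> List.filter (fun p -> p.age >= 18)
      |> List.map (fun p -> p.name)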
Direct answers:
- the full explanation of why Std is repeated three times is covered in the first four sections of the first manpage "dk(1)": <https://diskuv.com/dk/help/latest/manual/dk-1/>. It goes over some of the design behind that and also introduces aliases so that "the tool `StdStd_Std.Run` can be typed as `Run`". Did I unnecessarily expose "StdStd_Std" in the introduction?
Thank you for your response. I don't feel that you unnecessarily exposed anything, just that not much explanation was given. Take a look at The Rust Programming Language's "getting started" section[1]. The installation steps are clear, the code is deconstructed and explained, and you are made familiar with the basic CLI tooling. Looking at yours, installation steps are clear, but I'm just given commands and code to paste with no explanation as to what it all does. Sure, I could go browse the man pages, but it's easier to ease into reading manual pages after a gentle beginner's explanation.
Although, given that you mentioned teaching, maybe the page is intended to be presentation-style (less information on the page, more to be explained vocally)? But you can correct me if I'm wrong.
Yes, there is always verbal instruction (not really explanation) to start with the students. For example, most students I work with don't know how to open a terminal ... they need top-down guidance with copy and paste. The most explanation they get at the start is that a terminal is where you can copy text and see a response. Pointing them at a web page with directions (mine or others') has never worked for the vast majority of them. The Rust pages in particular ... some students would not understand that they have to press the greater-than symbol (>) to go to the next page, and almost all of them would not know they had to strip the $ from the commands (or have a clue what Linux is). I think the success rate for that Rust guide would be near zero without hand-holding. Of course, once they've seen how to do something, they shouldn't need as much hand-holding.
So eventually we come back and redo the content ... and that becomes the time that explanations are added.
I do like the Rust doc for experienced devs, though I'll quibble that it is not good for Windows users. I'll add a separate explanatory quick start for experienced devs.
I usually structure teaching the same way as https://www.writethedocs.org/videos/eu/2017/the-four-kinds-o.... So "the Quick Walkthrough Guide will explain what dk scripts are and give you small examples to run" is simply a learning-oriented tutorial, which is mostly about giving students confidence and visual feedback. And, simultaneously, it is an explanation of nothing (the video has a great explanation of why to do it that way). So I agree that an explanation of threads + Internet + cross-compilation would be quite nuts, but for an experienced developer I'd expect to see a meaty example (take a look at https://ziglang.org/ for comparison).
One concrete action may be to make two distinct Quick Start guides ... one for experienced devs and one for inexperienced students. Is that your thinking?
You are totally fine; the grandparent comment is just either needlessly nitpicking ("perfect is the enemy of the good") or misunderstanding what the tool is supposed to be used for.
P.S. Your idea of having two distinct quick start guides (one that goes into the meaty details and another one that is just “run this command and you are good to go”) is great. But imo it is not necessary/crucial, and not having it doesn’t detract from the value proposition of your tool at all either.
> The goal is that we can play Sudoku in TypeScript while the type checker complains about mistakes. This is not about implementing a Sudoku solver.
That goal could be extended to implement a Sudoku solver in the type system. One such solver was described at https://ocamlpro.com/blog/2017_04_01_ezsudoku/ for OCaml. TLDR: Your compiler can report the solution as an error message if your language supports refutation and has enough type machinery to accurately model Sudoku.
Having said that, I don’t think a Sudoku solver implementation embedded in a type system is practical (maybe fun for educational purposes though!)
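For anyone curious how a compiler can "report the solution as an error message", here is a toy OCaml illustration of the mechanism; it is vastly simpler than the blog's actual Sudoku encoding:

    (* The trick in miniature: the exhaustiveness checker enumerates the
       constructors a match fails to cover. If the type machinery refutes
       every non-solution, whatever the warning names *is* the answer. *)
    type _ parity = Even : [ `Even ] parity | Odd : [ `Odd ] parity

    (* Deliberately incomplete: the compiler warns "this pattern-matching is
       not exhaustive ... Odd" -- the checker itself names the missing case. *)
    let which : type a. a parity -> unit = function
      | Even -> ()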
That's fair. Assuming I introduced a new macro "compactexpr" it could have been written:
[%%compactexpr
  "SumArStaged";
  [ arr; n ];
  let sum = ref 0 in
  for i = 0 to 3 do
    for j = i to min (i + 3) (n - 1) do
      sum := !sum + arr.(j)
    done
  done;
  !sum]
;;

print_c "sum_ar" (module SumArConv) SumArStaged.expr;;
So that has the exact indent level of the C code. (Details: The above is formatted using the "conventional" profile for the "ocamlformat" tool)
Of course, there is a disadvantage: I had to use global variables in the above "compactexpr". Global variables mimic the C language (i.e., C functions are globally-scoped or file-scoped), but they wouldn't be a good fit when translating the code to functional languages. Additionally, the OCaml idiom `fun arr n ->` was lost, but that seems minor.
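For comparison, an (equally hypothetical) variant of the macro that keeps the binder would look like this, at the cost of one extra indent level relative to the generated C:

    [%%compactexpr
      "SumArStaged";
      fun arr n ->
        let sum = ref 0 in
        for i = 0 to 3 do
          for j = i to min (i + 3) (n - 1) do
            sum := !sum + arr.(j)
          done
        done;
        !sum]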
Do you and others find the above readable? (If so, perhaps it would be best to give some flexibility for the coding style.)