I think a 'linear types'-ish model, where the compiler flags an error if you didn't write the explicit unlock call and only compiles if some acceptable unlock call (or set of calls) is added, would be a good design. It would also prevent use-after-consume. I do want the static checks, but I don't think that means implicit function calls need to be generated on a `}` or an `=` (assigning to a variable causes a drop) etc. This is what Rust already does with `.clone()` -- needing the explicit call -- and I think it makes sense for a lot of cases of implicit drop. I've seen discussions about implementing this for Rust, and I have it implemented in a C borrow checker experiment I'm trying. ATS is also an example of this, and going further you get something like Frama-C or seL4 and so on.
The main point being: the implicitness is not necessary for "compiler makes sure you don't forget". So the original comment about how usage of the explicitly named and paired APIs can clarify intent both for the writer and reader can still stand while not implying that forgetting is involved. I see this dichotomy often being drawn and I think it's important to consider the language design space more granularly. I think a reminder is a better fix for forgetting, rather than assuming what you wanted and doing it for you.
(the explicit calls also work better with "go to definition" in the editor to see their code, let you learn what to look up in a manual, step in with a debugger, see reasonable function names in stack traces and profiles, pass / require more arguments to the drop call, let the drop call have return / error values you can do something about (consider `fclose`), let the drop call be async, ...)
It would also be nice to have an API that's usable from C/C++ code running in Wasm. I often see libraries do this: they expose a C/C++ library like Postgres to Wasm, the main / documented API is JS, and you have to dig a bit to find out whether the C/C++ API is accessible at all.
You do have to tag struct fields with a macro, but you can attach constexpr-visitable attributes. There's also a static limit to how many reflectable fields you can have, all reflectable fields need to be at the front of the struct, and the struct needs to be an aggregate.
that forEachProp function... it brings back nightmares of when, before variadics, we used to macro-generate up-to-N-arity functions (with all the const/non-const permutations).
Now I use the same trick in our code base to generically hash aggregates, but I limit it to 4 fields for sanity.
Nice! Ebiten is a super nice API for Go. Lots there to be inspired by in API design. Another API I like a lot is Love for Lua (which also actually can be used from C++).
Re: the comments on here about the GC etc. -- I've posted about this a couple times before but I've been using a custom Go (subset / some changes) -> C++ compiler for hobby gamedev, which helps with perf, gives access to C/C++ APIs (I've been using Raylib and physics engines etc.) and also especially has good perf in WebAssembly. Another nice thing is you can add in some reflection / metaprogramming stuff for eg. serializing structs or inspector UI for game entity properties. I was briefly experimenting with generating GLSL from Go code too so you can write shaders in Go and pass data to them with shared structs etc.
The compiler: https://github.com/nikki93/gx (it uses the Go parser and semantic analysis phases from the standard library so all it has to do is output C++) (it's a subset / different from regular Go -- no coroutines, no GC; just supporting what I've needed in practice -- typechecks as regular Go so editor autocomplete works etc.)
Very old video of a longer term game project I'm using it on which also shows state-preserving reload on rebuilds and also the editor inspector UI: https://youtu.be/8He97Sl9iy0?si=IJaO0wegyu-nzDRm (you can see it reload preserving state across null pointer crashes too...)
You do need some JS code that asks the browser to run the wasm blob. You can't eg. just have a script tag that refers to a wasm blob yet.
libc does help with things like having an allocator or string operations etc., or for using C libraries that use libc. And that's where emscripten becomes helpful.
Browser functionality like the console or making HTML elements is exposed through JS interfaces, and the wasm needs to call out to those. But they may be directly exposed to wasm later (or may already be, in new / experimental browser features).
The hello world in this guide doesn't actually use console.log at all. It adds 2 numbers and sets the page content to the result. All it does is expose an add function from Rust and call it from the JavaScript side.
Grasshopper can indeed drive NURBS surfaces in Rhino, if I remember correctly. But it suffers from the same problem I described initially: you have to start the design from Grasshopper; it is a one-way flow. There is no way to parametrically model something in Rhino and have it automatically generate a model graph that can then be reused.
This is the main difference between tools like Rhino and the aforementioned parametric modelers (Autodesk Fusion 360 is another fully parametric solid modeler that supports a 'tree' build with parameters that can be changed at will at any time, after which the model will rebuild).
What is the end result you want? If you're modeling by hand, why do you want to "program" the model later? What would that look like? What units would you operate on? What would the output be? etc.
An example would be similar to how a catalog / design table is used for generating parts procedurally in software like Catia:
Say you have a CAD model of a special bolt you have made completely from scratch, and it is fully parameterized: you can edit the diameter, the shank length, the thread length and pitch, the head size and height, etc. all by changing values in the parameters linked in the design tree. Doing this automatically updates the geometry.
Now say you have a catalog of 1000 sizes of this flavor of bolt: M6 thru M14, different threads, all the dimensions changing to very specific standards... and you need to generate files for all of those, individually, so they can be used by your company in assemblies or sent to others. Professional tools have ways to take, say, a CSV table of all those numbers, link the CSV to the CAD, and then generate STEP files and export them to disk.
I've done just this example many times over the years in Catia V5. Highly parameterized part catalogs are prevalent in automotive and aerospace companies tied very closely to their CAD and PLM solutions (and thus highly non-portable and very much proprietary...) It would be great if there was some FOSS that was lightweight and similarly capable.
Looking more into it, FreeCAD can actually do something similar, but I don't know if the parameters are accessible to rebuild the models via a Python module outside of the GUI.
> Say you have a CAD model of a special bolt you have made completely from scratch and it is fully parameterized
But what does it mean to parameterize something you made from scratch? Either you built it as a parameterized model, or you didn't. If I understand correctly, you'd basically like to design a model manually and magically get it parameterized? It's probably possible to design an AI model that attempts to do that.
Every modern CAD tool, like the commercial ones I've mentioned, automatically tracks the parameters as you build a model from scratch. They are parametric models by default. At any point you can go back and edit a number in the tree, and the model will attempt to update as if you had used that value from the beginning. It's hard to explain this if you're not familiar with how those programs work.
I'm not talking magic here. What I'm saying is: I'd like the ability to read the model file as text outside the CAD package, and to edit and compile (update) exportables in a scripting language. Again, this is actually already the way many tools like Catia and NX work... but they lack an external API / callable module in an external language (Catia can be scripted with VB script, blegh). And nothing that is FOSS.
I do also really like Go for various reasons, and have been working on a Go -> C++ transpiler and associated ECS libs to make a personal game project with. I used it to make a game for Raylib game jam earlier this year too: https://github.com/nikki93/raylib-5k You can see what the development workflow looks like in this video (the ECS stuff also has a built-in editor like a much more minimal version of Unity's): https://www.youtube.com/watch?v=8He97Sl9iy0
I'm trying to decide how much time I should devote to making this easier to set up / use by other people in the medium term, since it's just a side project for me. Might make a codespaces template so it's quick to get started.
I've been using my own little Go (subset / my own extensions) -> C++ compiler -- https://github.com/nikki93/gx -- and found it to be a fun way to add some guardrails and nicer syntax over C++ usage. You get Go's package system and the syntax analyzers / syntax highlighters etc. just work.