
Hi all,

My name is Devine, and I'm to blame for the uxn disaster. I'm getting this link thrown at me from all sides right now, so I figured I might as well chip in.

I'm a bit more on the design-y side of things, and all these fancy words used to talk about computers and retro computing are a bit beyond me; they reference an era of computing which I didn't really have a chance to experience first-hand.

But let's talk about "the uxn machine has quite the opposite effect, due to inefficient implementations and a poorly designed virtual machine, which does not lend itself to writing an efficient implementation easily."

This is meaningful to me and I'd love to have your opinion on this. Before setting out on the journey to build uxn, I looked around at the options that were out there that would solve our specific problems, and I'd like to know what you would recommend.

Obviously, I take it that the author is not advocating that we simply return to Electron, and that they understand that re-compiling C applications after each change is not viable with our setup, that Rust is many times too slow for us to make any use of it, and that, since we use Plan 9, C is not a good candidate for writing cross-compatible (libdraw) applications anyway.

So, what should we have done differently?

I'm genuinely looking for suggestions; even if one suggestion might not be compatible with our specific position, many folks come to us asking for existing solutions in that space, and I would like to furnish our answers with things I might not have tried.

Here are some directions we tried prior to Uxn:

https://wiki.xxiivv.com/site/devlog.html

Ah, one last thing, this is how you fib the first 65k shorts in uxntal:

https://git.sr.ht/~rabbits/uxn/blob/main/projects/examples/e...
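(For anyone who can't follow the link, here is roughly the same computation sketched in C. This is my own illustration, not a line-by-line translation of the linked uxntal: Fibonacci over unsigned 16-bit shorts, wrapping on overflow the way uxn's 16-bit arithmetic does.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t a = 0, b = 1;
        for (int i = 0; i < 0xffff; i++) {
            printf("%04x\n", a);
            uint16_t next = a + b;  /* wraps modulo 0x10000, like uxn shorts */
            a = b;
            b = next;
        }
        return 0;
    }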




I am not very familiar with Uxn, but the one thing from the article that struck me as an actual problem was that the memory containing the program's code can be modified at runtime. The downsides of that can heavily outweigh the benefits in many cases; it turns "write a uxn -> native code compiler" from a weekend project to a nearly impossible task, for example. This is probably relatively easy to fix, assuming uxn programs do not regularly use self-modifying code. (The article proposes an elaborate scheme involving alias analysis and bounds checks, but something like "store executable code and writable data in two separate 64KB regions" would work just as well.)
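A minimal sketch of that last idea, in C (an assumed layout for illustration, not how uxn currently works):

    #include <stdint.h>

    /* Code is written once at load time and then treated as read-only,
       while every runtime store lands in a separate data bank. */
    typedef struct {
        uint8_t code[0x10000];  /* filled from the ROM at load, never stored to */
        uint8_t data[0x10000];  /* all runtime stores target this bank */
    } SplitMemory;

    /* Because store8 can never touch m->code, a translator may compile
       each instruction exactly once and trust the result forever. */
    static inline void store8(SplitMemory *m, uint16_t addr, uint8_t value) {
        m->data[addr] = value;
    }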

The article suggests that Uxn programs can write to arbitrary locations on the filesystem. If that is the case, it seems like it would be really easy to change that in the interpreter. Then Uxn's security model would be essentially identical to how the article describes WebAssembly: the interpreter serves as a "sandbox" that prevents programs from touching anything outside their own virtual memory. This is a good security model and it likely makes a lot of sense for Uxn.
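One common way to implement that kind of sandbox, sketched under the assumption of a POSIX host (I haven't checked how any uxn emulator actually does it): resolve the requested path and refuse anything outside a designated root.

    #include <limits.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns 1 if the requested path resolves to somewhere under root.
       A real implementation would also reject prefix collisions like
       "/sandbox2" matching root "/sandbox", and handle files that do
       not exist yet (realpath fails on them). */
    int path_allowed(const char *root, const char *requested) {
        char resolved[PATH_MAX];
        if (realpath(requested, resolved) == NULL)
            return 0;  /* unresolvable paths are rejected */
        return strncmp(resolved, root, strlen(root)) == 0;
    }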

Otherwise, the article seems to be more bluster than substance. Uxn is probably not going to be the fastest way to compute the Fibonacci sequence, nor the most secure way to protect a telecommunications network from foreign cyber-attacks, but it doesn't need to be either of those things to be useful and valuable as a way to write the kind of UI-focused personal computer software you want to write.


The second (or first, maybe) most popular Uxn emulator, Uxn32, has sandboxed filesystem access, and has had it since the beginning. The author of the original article doesn't know what they're talking about.


I use Uxn's self-modification powers quite a bit, mostly for routines that run thousands of times per frame, so I don't have to pull literals from the stack; I can just write myself a literal in the future and have it available. I wonder, what about this makes a native code compiler difficult? Is it because most programs protect against this sort of behavior, or that programs are stored in read-only memory?
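An illustration of that literal-patching trick in C, from the emulator's point of view (my sketch, not uxn's source): instead of keeping a value on the stack, the program stores it over the operand byte of a LIT instruction further ahead, so the hot routine pushes the new value for free on its next run.

    #include <stdint.h>

    static uint8_t ram[0x10000];

    enum { LIT = 0x80 };  /* uxn's LIT opcode */

    /* routine_addr points at a LIT opcode inside the hot routine; the
       byte right after it is the literal we rewrite between runs. */
    void patch_literal(uint16_t routine_addr, uint8_t value) {
        ram[routine_addr + 1] = value;
    }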

Some of the emulators sandbox the file device; it is likely the way all the emulators will work eventually.


> I wonder, what about this makes a native code compiler difficult? Is it because most programs protect against this sort of behavior, or that programs are stored in read-only memory?

Say you're trying to "translate" a Uxn program into x86 code. The easiest way to do this is by going through each of the instructions in the program one by one, then converting it to an equivalent sequence of x86 instructions. (There is a little complexity around handling jumps/branches, but it's not too bad.)

But if the Uxn program is allowed to change its own code at runtime, that kind of conversion won't work. The Uxn program can't change the x86 code, because it doesn't know what x86 code looks like; it only knows about Uxn code. There are some ways around this, but they're either really slow (e.g. switching back to an interpreter when the Uxn code has been modified), much more complex (e.g. a JIT compiler), or don't work all the time (due to the halting problem).
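A sketch of the problem in C (the names here are illustrative, not from any real uxn implementation): an ahead-of-time translator compiles each block to native code once, so any later store into that block makes the native code stale.

    #include <stdbool.h>
    #include <stdint.h>

    #define MEM_SIZE 0x10000

    typedef struct {
        uint8_t ram[MEM_SIZE];
        bool dirty[MEM_SIZE];  /* set when a byte is overwritten at runtime */
    } Machine;

    static void store8(Machine *m, uint16_t addr, uint8_t value) {
        m->ram[addr] = value;
        m->dirty[addr] = true;  /* any translation of this byte is now stale */
    }

    /* Before entering a pre-translated block, check whether any byte of
       it has changed; if so, the only safe options are re-translating
       (a JIT) or falling back to the interpreter. */
    static bool block_still_valid(const Machine *m, uint16_t start, uint16_t len) {
        for (uint16_t i = 0; i < len; i++)
            if (m->dirty[(uint16_t)(start + i)])
                return false;
        return true;
    }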


It's ultimately a hardware-dependent answer. Self-modifying code fell out of fashion in the 1990s, once pipelined and cached code execution became the norm in desktop computing. From that point forward, correct answers to how to optimally use your hardware became complex to reason about, but they generally follow the paradigms of "data-driven" coding: you design code that the CPU understands how to pipeline (and that is therefore light on branching and indirection during inner loops), and data that the CPU can cache predictably (which leads to flat, array-like structures that a loop traverses forward through).

Therefore what compilers will actually do is reorder program behavior towards optimal pipeline usage (where doing so doesn't break the spec). This has clear downsides for any programming style that relies on knowing instruction-level behavior. And it is so hard to keep up with the exactly optimal instructions across successive generations of CPUs that, in the majority of cases, humans never get around to attempting hand optimization.
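A small C sketch of that "data-driven" style: a flat array traversed strictly forward, so the prefetcher sees a predictable access pattern and the inner loop carries no branching or pointer chasing.

    typedef struct { float x, y, dx, dy; } Particle;

    void integrate(Particle *ps, int n, float dt) {
        for (int i = 0; i < n; i++) {  /* sequential access: cache-friendly */
            ps[i].x += ps[i].dx * dt;
            ps[i].y += ps[i].dy * dt;
        }
    }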

The benefit of defining a VM is that you can define whatever is optimal in terms of pragmatic convenience: if you want to write programs that have a certain approach to optimization, you can make some instructions, uses of memory, or styles of coding relatively faster or slower, and this leads to an interesting play space for programmers who wish to puzzle through optimization problems. But unless the VM also happens to represent the actual hardware, it's not going to achieve any particular goal for real performance or energy usage - at least, not immediately; widespread adoption motivates successively more optimal implementations. That logic makes it hard to justify any "starting over" kind of effort, because you end up at the conclusion that the market is actually succeeding at gaining efficiency through its fast-moving generational improvements, even if it simultaneously results in an environment of obsolescence as the ecosystem-as-a-whole moves forward and leaves some parts behind.

An alternate path forward, one which can integrate with the market, is to define a much more narrow language for each application you have in mind, and be non-particular about the implementation, thus allowing the implementation to become as optimal as possible given a general specification. This task leads, in the large scale, towards something like the VPRI STEPS project, where layers of small languages build off of each other into a graphical desktop environment.


I think a VM for a small, but highly abstract, language like Scheme might address the objections of the author(s) of this article. You might like Chibi-Scheme: https://github.com/ashinn/chibi-scheme

Having said that, IMO, if you're having fun with uxn and its retro 8-bit aesthetic, by all means keep going with that.


I did use Chibi! I've implemented a few lisps, but I always found their bytecode implementations too convoluted and slow, so I went with a stack machine for this particular project. I might, at some point, implement a proper scheme in uxntal.
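For readers who haven't seen one, this is the general shape of a stack-machine dispatch loop in C (a toy sketch with made-up opcodes, not uxn's actual interpreter):

    #include <stdint.h>

    enum { OP_BRK, OP_LIT, OP_ADD };

    int run(const uint8_t *code) {
        uint8_t stack[256];
        uint8_t sp = 0;
        for (uint16_t pc = 0;;) {
            switch (code[pc++]) {
            case OP_BRK: return stack[--sp];               /* halt: top of stack */
            case OP_LIT: stack[sp++] = code[pc++]; break;  /* push next byte */
            case OP_ADD: sp--; stack[sp - 1] += stack[sp]; break;
            }
        }
    }
    /* run((const uint8_t[]){OP_LIT, 2, OP_LIT, 3, OP_ADD, OP_BRK}) == 5 */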

It's neat, but I don't remember seeing a graphical API for it; I'll have a look :)


I'm not aware of one. I was thinking that you could roll your own, just as your Varvara computer defines a graphics device on top of uxn. You'd still gain the benefits of using an existing language.


https://wiki.xxiivv.com/site/devlog.html

> For a time, I thought I ought to be building software for the NES to ensure their survival over the influx of disposable modern platforms — So, I did. Sadly, most of the software that I care to write and use requires slightly more than an 8-button controller.

Seems to me you could have saved a lot of effort by changing gears slightly to target other ubiquitous 6502 machines, such as the Apple II or C64.


I thought so too :)

I did a few C64 test applications, both in plain 6502 and via cc65 (which works very poorly on ARM, btw), but that didn't really work out for us. I ran into issues porting VICE to Plan 9, and I had all sorts of issues with C64 sound emulation.


Check out Project Oberon: https://people.inf.ethz.ch/wirth/ProjectOberon/index.html and "System design from provably correct constructs": https://archive.org/details/systemdesignfrom00mart

And "don't let the turkeys get you down". :)


Oberon is great! I remember reading the book; it gave me all sorts of ideas for Uxn. I love writing Pascal, and Modula/Oberon's drawing API is excellent. It's much too massive a system for me, and I can't even begin to imagine how I'd bring it over to the GBA/Amiga/etc., but I recommend people go through the book from time to time.

https://wiki.xxiivv.com/site/pascal.html


Yeah, the Oberon OS might be a bit much for your applications, eh? The chip is interesting as a target for a low-tech 32-bit platform. I've heard that the folks at Noisebridge (here in San Francisco) are playing around with making their own silicon ICs.

The "Provably Correct" book presents the work of Dr. Margaret Hamilton (she of Apollo 11, who coined the term "software engineer"). It shows a simple elegant way to make easy-to-use safe programming systems.

I just gotta say, I envy you guys. :)


> re-compiling C applications after each change is not viable with our setup

What setup is that? Or, where can I read more about it? I realize this is kind of irrelevant, since the original article criticizes C and similar languages. But I'm curious.



Why can't you compile C on a Raspberry Pi 3? That's a supercomputer compared to anything that existed throughout the '80s and '90s. Especially since your programs seem to be pretty small, with kind of vintage retro graphics, I'd imagine they compile basically instantly? I've never programmed a video game, but isn't the common pattern to make a game engine plus an embedded scripting-language interpreter, so you don't have to recompile for all the little tweaks? Don't get me wrong, there's no reason not to do what you did, but I'm not seeing the unique challenges solved - in what direction are you trying to move the needle here?


It's quite common that people think they know what working on a Pi is like because, on paper, it has all the specs of a very fast machine, but actually using the thing as a daily driver is another story altogether.

So, of course you can; I've written every application that now exists on Uxn in C, prior to porting them. There was a moment when I was quite convinced that C was a good candidate for what we wanted to do.

For example, equivalent programs:

C version: https://git.sr.ht/~rabbits/orca-toy/tree/main/item/etc/orca....

Uxn version: https://git.sr.ht/~rabbits/orca-toy/tree/main/item/src/orca....

Compiling the Orca C version (SDL, on the Pi) is about 10x slower and more battery-hungry (and also, for some reason, very upsetting to ASAN) than building the C version (libdraw, on the Pi), which is itself about 50x slower and more battery-hungry than assembling the Uxn version.

I can compile little C CLI tools on the Pi just fine, that's not the issue; building X11 or SDL applications with debug flags is another story. I'd much rather be assembling uxn ROMs in this case. I wrote a few cc65-compiled applications prior to building uxn as an alternative, but that was a non-starter on ARM.


I have tried the Pi as my only PC; I gave up after a short while. Not to go too much on a tangent, but my takeaway is that whether or not any computer will work depends on your life and work.

If you can limit yourself to low resolution, a small color palette, and direct use of a simple graphics API - basically target the computing experience of when computers had 50 MHz CPUs - then it's possible to make things blazing fast even on a Pi, using any language. I'm sure on a "well, ackshually" level there is some difference in how the CPU executes things, but practically it's milliseconds and milliamps, not anything perceptible.

With C, I'm not sure I know what I'm talking about here (I've compiled hello world with gcc myself, and I know how to use Make even though I don't understand it - I've also never needed to understand it, but it seems like every project uses makefiles or something like them...), but isn't compiling SDL or libdraw a one-time thing the very first time you build the program, and then each time you make a change to orca.c it compiles pretty quickly (until 'make clean')? You don't include the time to re-compile the uxn emulator when you say uxn is 50x faster? What is ASAN?


One of the reasons I tend to favor Plan 9 is that it's blazing fast compared to DietPi/Raspbian-like distros. It does away with most of the rounded corners, alpha, and soft-looking fonts - but after a little while, I barely notice they're gone.

I haven't found that the C build systems necessarily make this faster or more pleasant, they often get in my way and make it hard to replicate my environment across devices.

It's that same idea you just mentioned, that "any language will do", that sent me down the path toward virtual machines: if the language doesn't matter, I might as well pick something that appeals to my aesthetics and maps well to what I'm trying to do.

If Orca were written in Lua for a framework like Love2d, for example, then I wouldn't have to recompile Love2d itself; it would be more akin to how uxn works. That's usually why people use scripting languages, I think?

ASAN is an address sanitizer; if you do any C development on ARM devices, you'll get pretty familiar with its countless ways of breaking in fun ways.



