gsliepen's comments

The Intel architecture is already Turing complete when you just use MOV instructions: https://github.com/xoreaxeaxeax/movfuscator. Of course, you don't even need instructions at all: https://news.ycombinator.com/item?id=5261598

I came back to reply with just this. Christopher Domas's conference talk on the movfuscator is legendary:

https://www.youtube.com/watch?v=R7EEoWg6Ekk


While this is true, I suspect a spec-compliant implementation of the x86 mov instruction would use many more transistors than OP's entire CPU.

Of course, but you don't have the toy CPU under your desk or in your laptop running at several GHz, nor are you likely to find it in a target that really needs a cute hack to obscure your exploit.

> The Intel architecture is already Turing complete when you just use MOV instructions

No physically existing architecture is Turing-complete, since every CPU can (by physics) only access a finite amount of memory, which means that its state space is finite, unlike the infinite state space of a Turing machine.


But that's not a very useful definition, so we usually don't bother enforcing that constraint.

I wrote a software synth myself with the intention of running it on the Raspberry Pi 3 / Zero 2. Those are actually quite capable processors; sound synthesis requires very little RAM, both for code and for maintaining state, so everything fits in the rather tiny cache. And while these Pis use "little" cores, the maximum throughput of NEON instructions is actually the same as for the corresponding "big" cores like the Cortex-A72. With four cores, you can do on the order of ~10 GFLOPS of 32-bit FMA operations. With a sample rate of 96 kHz and 32-note polyphony, you theoretically have a few thousand FMA instructions per note to spend.
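
To check that budget, a quick back-of-the-envelope sketch (all figures are the estimates from above):

    # Rough per-note FMA budget from the figures above (all estimates).
    fma_per_second = 10e9   # ~10 GFLOPS of 32-bit FMA across four cores
    sample_rate = 96_000    # Hz
    polyphony = 32          # simultaneous notes

    budget = fma_per_second / (sample_rate * polyphony)
    print(budget)  # ~3255 FMAs per note per sample: "a few thousand"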

To put this into perspective, five FMAs per sample is enough to implement a biquad filter. This is a very common DSP building block that can implement an idealized version of any of the 2nd-order filters that were ubiquitous in analog synths, e.g. high pass/low pass/band pass/notch/all pass. See the famous Audio EQ Cookbook for examples:

https://www.w3.org/TR/audio-eq-cookbook/
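
For the curious, here is roughly what that five-FMA kernel looks like, as a minimal Python sketch of the transposed direct form II structure (coefficient names follow the cookbook with a0 normalized to 1; a real synth would of course do this in vectorized C or NEON intrinsics):

    class Biquad:
        """Second-order IIR filter (transposed direct form II).

        Five multiply-accumulates per sample. The coefficients b0, b1,
        b2, a1, a2 come straight from the Audio EQ Cookbook formulas.
        """
        def __init__(self, b0, b1, b2, a1, a2):
            self.b0, self.b1, self.b2 = b0, b1, b2
            self.a1, self.a2 = a1, a2
            self.s1 = self.s2 = 0.0  # filter state

        def process(self, x):
            y = self.b0 * x + self.s1
            self.s1 = self.b1 * x - self.a1 * y + self.s2
            self.s2 = self.b2 * x - self.a2 * y
            return y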

By chaining various combinations of EQ and non-linear distortion (there are lots of ways to implement the latter, probably involving more FMAs), you can build very good simulations of common analog synth signal paths.
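
One of those non-linear stages could be as simple as a tanh soft clipper; just an illustrative sketch, and the drive parameter is made up:

    import math

    def soft_clip(x, drive=2.0):
        # Smooth saturation: roughly linear for small inputs, approaching
        # +/-1 for large ones; 'drive' sets how hard the curve is pushed.
        return math.tanh(drive * x) / math.tanh(drive)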

Note that gsliepen's example sample rate of 96 kHz is perfectly reasonable in this context; it's more than you need to exceed the limits of human hearing, but it's common to oversample your signal for processing to avoid problems with aliasing.


> My hunch is that many advocates would hesitate to put this in their project Readme, because they know that some companies might actually comply... by not using the code.

Definitely. And not only companies; even Debian rejected some packages because the upstream owners added restrictive "desires" on top of the actual licenses.


Something like MessagePack or CBOR, and if you want versioning, just have a version field at the start. You don't require a schema to pack/unpack, which I personally think is a good thing.
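
A minimal sketch of that idea using the Python msgpack bindings (the field names here are made up for illustration):

    import msgpack

    # Pack: a version field first, then the payload. No schema required;
    # MessagePack data is self-describing.
    blob = msgpack.packb({"version": 1, "values": [1, 2, 3]})

    # Unpack: inspect the version before interpreting the rest.
    msg = msgpack.unpackb(blob)
    if msg["version"] == 1:
        print(msg["values"])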

> You don't require a schema to pack/unpack

Then it hardly solves the same problem Protobuf solves.


Arrow is also becoming a good contender, with the extra benefit that it is better optimized for data batches.

IMO, the GPU is actually a strength. It might not be the most powerful, but it is supported by the mainline kernel and libraries. Many other phone SoC vendors throw some binary drivers over the wall if you are lucky, and good luck if you ever want to upgrade the OS the device came with.

That was the choice they made: support over performance. It was the right choice, but it doesn't change the fact that the performance is subpar. I am hoping that in the future there will be options that allow them to offer one without sacrificing the other.

Well, they probably didn't have a choice in which GPU to use. You choose a SoC and then you are stuck with everything it comes with. What is most amazing is that they put in the effort to make a completely open source, upstreamed driver for its GPU.

I see only two possible paths for another GPU on RPis in the future: either Broadcom drops the VideoCore GPU and switches to Mali or Adreno (which I think is unlikely), or RPi stops using Broadcom SoCs and switches to something completely different. Still unlikely, but now that they have the RP1 chip taking over most of the I/O functions, it would not be too hard for them to change and still make boards that retain compatibility with existing HATs.

Still, the RPi is made at a certain price point, which very likely precludes putting in a flagship SoC, so even if they change, I'm not sure it will do wonders for GPU performance.


Tinc unfortunately lacks maintainers with enough time to dedicate to it.

Tinc 1.1 should make setting up easier: it has a CLI to set up and add nodes without having to manually edit config files, and you can generate invitation URLs, which makes it even easier.
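
From memory, the tinc 1.1 workflow looks something like this (treat it as a sketch; `myvpn`, `server`, and `laptop` are placeholder names):

    # On the first node: create the VPN and node configuration.
    tinc -n myvpn init server

    # Generate an invitation URL for a new node.
    tinc -n myvpn invite laptop

    # On the new node: join using that URL.
    tinc join <invitation-url>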


And even in projects that are maintained by more than one person, it's usually just a single person responsible for most of the commits.


This is the exact reason I decided to avoid 11ty for my personal website and instead went with Jekyll [0].

[0] https://github.com/11ty/eleventy/graphs/contributors


> But half of people earn less than the mean salary though.

That's incorrect. Half of people earn less than the median salary. Depending on where you live, it could be that a lot more than half earn less than the mean salary.
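
A toy example with made-up salaries shows the difference:

    from statistics import mean, median

    # One high earner skews the mean upwards.
    salaries = [30_000, 35_000, 40_000, 45_000, 500_000]

    print(mean(salaries))    # 130000.0 -- four out of five earn less than this
    print(median(salaries))  # 40000 -- by definition, half earn less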


Sorry, yes. Brain fart. I meant median.


But... Debian is also a rolling release distro: just use the "testing" or "unstable" suite. I have been using Debian unstable on my main desktop since 1999, and have had very few issues with it. The testing suite is the one that filters out most bugs found in unstable, and it is something you can definitely use as a regular user.
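
For example, tracking testing is just a matter of pointing your apt sources at the suite name instead of a release codename (a sketch; your mirror URL may differ):

    # /etc/apt/sources.list
    deb http://deb.debian.org/debian testing main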


Can confirm: I have been using the Debian testing branch on my local server for AI experiments for a year and it works great. Never hit any major issues, and I always have (reasonably) up-to-date software.


Right, fair enough. I should have clarified that I meant Debian stable (and by extension all other non-rolling-release distros).


At first glance it looks like this is very useful, but it only gives a speedup for very sparse graphs with an average degree of less than 3, unless your graph is very big, as in trillions of vertices.


Degree less than 6? If m < 3n, that means there are at most three times as many edges as vertices, and each edge connects two vertices, so the average degree 2m/n is less than 6.

So 2D square lattices would still benefit.
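
A quick sanity check for the square-lattice case (sketch):

    # Average degree of a k-by-k square grid: n = k*k vertices and
    # m = 2*k*(k-1) edges (k rows plus k columns of k-1 edges each).
    k = 1000
    n = k * k
    m = 2 * k * (k - 1)
    print(2 * m / n)  # ~3.996, comfortably below 6, so m < 3n holds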

But yeah, not a total domination.

