This is really cool. Have you looked at all at the constant-Q transform (https://en.m.wikipedia.org/wiki/Constant-Q_transform)? It is basically a Fourier transform with logarithmically spaced bins, so that it can be aligned with traditional musical notes. Miller Puckette (Max/Pure Data), among others, has a fast algorithm for its calculation.
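
As a rough illustration of the bin layout (a toy sketch of my own; the starting frequency and bin count are made-up parameters), with 12 bins per octave each bin lands on a semitone:

    // Constant-Q bin center frequencies: logarithmically spaced so that
    // 12 bins per octave line up with the semitones of the chromatic scale.
    fn cqt_center_freqs(f_min: f64, bins_per_octave: u32, n_bins: u32) -> Vec<f64> {
        (0..n_bins)
            .map(|k| f_min * 2f64.powf(k as f64 / bins_per_octave as f64))
            .collect()
    }

    fn main() {
        // Starting at C1 (~32.70 Hz); bin 12 is then C2, bin 24 is C3, and so on.
        for (k, f) in cqt_center_freqs(32.70, 12, 25).iter().enumerate() {
            println!("bin {:2}: {:7.2} Hz", k, f);
        }
    }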


No, I haven't seen this; it's closely related. I guess if you wanted to use it in real time, you would still need to solve the question of window size...


Well, according to Cuomo’s March 26 briefing, the ventilator situation may be nearly that bad. https://www.bloomberg.com/news/articles/2020-03-26/n-y-death... 3-4 days on a ventilator for non-COVID patients versus 11-21 days for COVID patients.


Yeah, it's certainly more prolonged, though even the extreme end of that range is a 7x delta.

That would mean needing 2 million to 12 million positive COVID-19 tests, at a 20% hospitalization rate, to get a 20x outcome versus our flu seasons.


Yeah, I think this terminology is odd, because the computable numbers are much more “useful” than the definable numbers.


For an alternate empirical take:

https://weis2019.econinfosec.org/wp-content/uploads/sites/6/...

They find publishers only get a 4% premium on programmatic ads that involve behavioral targeting.


That the test they use to select employees correlates with their ratings of those employees is pretty circular. Any arbitrary test will do that.


I haven’t read this in detail, but a few things stand out. In the write-up, you say it compares favorably against bit-wise radix sort. You need to compare against an optimized byte-wise radix sort, which is by far the fastest sort for decent-sized arrays of integers. The second claim, that skyline sort somehow sorts an array of size 2n faster than two arrays of size n, does not make sense. That implies that the time per element goes to zero as the size goes to infinity. That would be faster than O(n), which is what radix sort is, i.e. constant time per element for large arrays.

It sounds like this is just a variation of counting sort or bucket sort optimized for a small number of unique values.
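
By byte-wise I mean something along these lines (a quick, untuned sketch of an LSD radix sort on 32-bit keys, not the optimized version you would actually benchmark against):

    // LSD byte-wise radix sort for u32 keys: four stable passes of 8 bits each,
    // counting occurrences per byte value and scattering into a scratch buffer.
    fn radix_sort_u32(data: &mut Vec<u32>) {
        let mut buf = vec![0u32; data.len()];
        for pass in 0..4u32 {
            let shift = pass * 8;
            let mut counts = [0usize; 256];
            for &x in data.iter() {
                counts[((x >> shift) & 0xFF) as usize] += 1;
            }
            // Prefix sums give the starting offset of each bucket.
            let mut offsets = [0usize; 256];
            let mut total = 0;
            for b in 0..256 {
                offsets[b] = total;
                total += counts[b];
            }
            for &x in data.iter() {
                let b = ((x >> shift) & 0xFF) as usize;
                buf[offsets[b]] = x;
                offsets[b] += 1;
            }
            std::mem::swap(data, &mut buf);
        }
    }

    fn main() {
        let mut v = vec![170, 45, 75, 90, 802, 24, 2, 66];
        radix_sort_u32(&mut v);
        println!("{:?}", v); // [2, 24, 45, 66, 75, 90, 170, 802]
    }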


I know I need to compare it head-to-head against byte-wise radix sort. I might do that and get back with the results. In the meantime, let me clarify. EDIT: Blas Mena and I did test it against radix sort; you can find it at https://github.com/daniel-cussen/skylinesort/blob/master/src... It tests against a 4-bit radix sort, and radix sort comes out 3% slower.

When I say that it can sort an array of size 2n faster than two arrays of size n, it is because if you plot the asymptotic time, you find its second derivative is negative (the curve is concave). This is because of the subtraction in the asymptotic time, O(k u - u log u + n). A Stanford professor was reluctant to accept an asymptotic time with a subtraction in it, but I explained it was really the logarithm of a division: k u - u log u is actually u log(K / u), where K is 2^k, the size of the (initially empty) auxiliary array. log(K / u) is the same as log K - log u, and log(K) = k, so you end up with k u - u log u, which has a negative second derivative when k is fixed.

When log u = k, skylinesort does indeed run in linear time, but it runs slower than linear time before that, and log u is never greater than k, by definition.
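
To make the 2n-versus-two-arrays-of-n claim concrete, here is a quick numeric check of that cost term (my own sketch; k = 32 is just an assumed key width in bits):

    // With f(u) = k*u - u*log2(u) and k fixed, one array with 2u unique
    // values costs less than two arrays with u unique values each:
    // f(2u) = 2*f(u) - 2u < 2*f(u).
    fn main() {
        let k = 32.0_f64; // assumed key width in bits
        let f = |u: f64| k * u - u * u.log2();
        for &u in &[1_000.0, 100_000.0, 10_000_000.0] {
            println!("u = {:>12}: f(2u) = {:.3e}   2*f(u) = {:.3e}",
                     u, f(2.0 * u), 2.0 * f(u));
        }
    }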


Daniel also addressed this some, but I want to highlight that sorting one array of size 2n faster than two arrays of size n does not imply the per-element time decreasing to zero. E.g. if the runtime were n + 2^(-n) you would have that property even though the per-element runtime tends toward 1.

<rant> That kind of phenomenon (reading too far into an infinite process) is ubiquitous once you start noticing it. Infinitely many universes don't imply the existence of every possible universe (if you subscribe to such theories), pi having infinitely many digits doesn't imply that it encodes every possible message somewhere in those digits (most real numbers have that property, but iirc it's still an open question for pi), and so on. </rant>


The default Android calculator app by Hans Boehm (developer of the Boehm Garbage Collector as well) does this by using the computable real numbers.

https://dl.acm.org/doi/10.1145/2911981

Provides a good overview of how it works, and Boehm's website has more information.

What’s cool about the computable reals implementation is that you can increase the precision after the fact and it will recalculate up to that precision. Basically, it memoizes the steps of the calculation and how they affect the precision.
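
As a toy illustration of the representation (my own sketch, not Boehm's code, and it leaves out the memoization): a computable real can be modeled as a function from a requested precision to a scaled integer approximation, so asking for more precision later just means calling it again with a larger argument.

    // A computable real as a closure: given a precision p, return an integer n
    // such that the value is within about 2^-p of n * 2^-p.
    struct CReal(Box<dyn Fn(u32) -> i64>);

    impl CReal {
        fn from_int(v: i64) -> CReal {
            CReal(Box::new(move |p| v << p))
        }
        // Addition: evaluate both operands with a couple of guard bits more
        // precision than requested, then round back down.
        fn add(self, other: CReal) -> CReal {
            CReal(Box::new(move |p| ((self.0)(p + 2) + (other.0)(p + 2) + 2) >> 2))
        }
        fn approx(&self, p: u32) -> f64 {
            (self.0)(p) as f64 / (1i64 << p) as f64
        }
    }

    fn main() {
        let x = CReal::from_int(1).add(CReal::from_int(2));
        println!("{}", x.approx(10)); // 3, accurate to about 2^-10
        println!("{}", x.approx(30)); // same value, recomputed more precisely
    }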


This isn’t a performance optimization but rather an accuracy optimization. Even if the requested output is a double (64 bits), the intermediate calculations often need to be done at higher precision to get fully accurate answers. Note that the default calculator on Android does the same analysis by using computable numbers.

https://dl.acm.org/doi/10.1145/2911981

Is a nice overview.


It's also a performance optimization though, since otherwise you might instead just use 400 digits of precision (or whatever) all the time and round only the output.


Somewhat, in that it doesn’t use more precision than needed, but the real issue is that you can’t just pick some arbitrarily large precision and round at the end. For some calculations, even 400 digits during intermediate steps would not be enough due to catastrophic cancellation, and you would need to go to even higher precision to get the right answer. It really is about solving an accuracy issue, not an optimization. And determining that you are using sufficient precision to get an accurate answer is an extra cost, so it is always more expensive than just plowing ahead and calculating an inaccurate answer.
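
A tiny illustration of the failure mode (my own example; it uses plain 64-bit doubles, but the same thing happens at any fixed precision once the operands are close enough):

    fn main() {
        // Two values that agree in roughly their first 15 significant digits.
        let a: f64 = 1.0 + 1e-15;
        let b: f64 = 1.0;
        // The true difference is 1e-15, so this should print exactly 1.0,
        // but it prints ~1.11: the leading digits of the result come
        // entirely from the rounding error in a.
        println!("{}", (a - b) * 1e15);
    }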


Presumably there is a maximum precision; otherwise a seemingly innocuous calculation could run you out of memory.


I would also like to note that encoding the Option<char> using the unused bits of the return value is a perfectly valid implementation. But that is exactly what it is: an implementation detail. It could work exactly the same way as today, but the programmer wouldn’t have to care about how it was implemented, just whether they got a char or None.


In fact, that's exactly what Rust already does today! Option<char> takes exactly the same number of bits to store as plain char, because the compiler has enough information to encode the Option-ness in bit patterns it knows are invalid for the underlying type.

https://play.rust-lang.org/?version=stable&mode=debug&editio...
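
Roughly what that playground snippet shows (the link above is truncated, so this is my own reconstruction of the check):

    use std::mem::size_of;

    fn main() {
        // char is 4 bytes but can't hold every 32-bit pattern (no surrogates,
        // nothing above 0x10FFFF), so the compiler stores None in one of the
        // invalid patterns instead of adding a tag byte.
        assert_eq!(size_of::<char>(), 4);
        assert_eq!(size_of::<Option<char>>(), 4);
        println!("Option<char> is {} bytes", size_of::<Option<char>>());
    }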


Rust guarantees this, actually.


http://c-faq.com/decl/spiral.anderson.html is the rule I learned for understanding C declarations, and while the rule there is described as simple, I think the examples, even without argument types, are actually fairly complex.

It also seems telling that no recent language has followed C’s example for declaration style, which is more implicit than explicit.


The "spiral rule" doesn't get at the heart of declarations. It's just by some guy that tried to figure it out on his own, and what he discovered was basically not declarations but the precedence rules of C expressions ;-)

> It also seems telling that no recent language has followed C’s example for declaration style, which is more implicit than explicit.

Actually, most languages don't let the user do what C declarations let you do. For example, in Java (almost) everything is an object, and you can't just create a triply-indirected pointer. So these languages can afford a declaration syntax that is less powerful.

And then there are other, more systems-oriented languages that chose not to copy C declarations. They come with their own gotchas; as examples I will pick D and Rust.

In D, you create a multi-dimensional array like this: int[5][10] arr; leading you to believe that you can use it as arr[4][9]. Wrong, that's an out-of-bounds error; you need to write arr[9][4]. Not confusing at all, right? The alternative is to expand these types systematically to the left, i.e. write [10][5]int, and maybe move the type to the right of the variable name, as in "let arr [10][5]int;". Honestly, I don't like that either.

I've never really used Rust (either), but its downside, in my opinion, is that it has much more distracting syntax / punctuation.

I would love it if there were a uniformly better way to declare things than the C way, but I still think C has the best tradeoffs for my practical work. The next time I toy with language design I might try simply going with C declarations, prefixed with a sigil or "let" or something, to remove the need for the lexer hack.

