Hacker News new | past | comments | ask | show | jobs | submit | Corendos's comments login

That actually makes a lot of sense for improving performance in specific places. Think about generating lookup tables at compile time, for example: instead of maintaining a separate script to generate them, you keep the generation right next to the code that uses it, written in the same language and always up to date.
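A minimal sketch of the idea in Rust (the language is my choice for illustration; the same pattern exists as `comptime` in Zig or `constexpr` in C++). A 256-entry popcount table is built by a `const fn`, so it is computed once at compile time and can never drift out of sync with the code using it:

```rust
// Build a 256-entry popcount lookup table at compile time.
// The table lives in the binary's read-only data; no runtime cost.
const fn build_popcount_table() -> [u8; 256] {
    let mut table = [0u8; 256];
    let mut i = 0;
    while i < 256 {
        table[i] = (i as u8).count_ones() as u8;
        i += 1;
    }
    table
}

const POPCOUNT: [u8; 256] = build_popcount_table();

fn main() {
    // Look up the number of set bits in a byte without computing it.
    assert_eq!(POPCOUNT[0b1011], 3);
    assert_eq!(POPCOUNT[255], 8);
    println!("popcount(0b1011) = {}", POPCOUNT[0b1011]);
}
```

If the generation logic changes, the table changes with the next build — exactly the "no separate script" property described above.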


Not really, but to quote the README:

"RustyHermit is a rewrite of HermitCore in Rust developed at RWTH-Aachen."

They most probably wanted to keep the "Hermit" name.


I mean, if you only have one task to do, multitasking is kind of pointless, yes.


It is nice to be able to listen to a podcast in the background while playing a game, and have the sound come out the same speakers/headphones. It's much harder to do that if the game runs bare metal.


Each of those services can be a unikernel being managed by a type 1 hypervisor.


Absolutely. But I think I'm correct in saying that if I wanted to play a game shipped as a unikernel today, it would be pretty difficult to have the game and Spotify running at the same time. First we need a Spotify unikernel; then we need people to accept using the new unikernel version of Spotify just when they play the game(s), and maybe get used to the Spotify unikernel not having access to the graphics hardware, etc. Or we run Spotify on a traditional OS under the hypervisor and lose all the benefits we were shooting for in the first place.

Technically it's all manageable, but that wasn't the aspect I was focusing on; rather, it's the reality that we couldn't just do this today, or even all that soon.

Alternatively, PCs become game consoles when running games, and you just can't multitask, much like you mostly can't on a PS2, etc. Honestly, I think this would be the most likely outcome if games went down this road.


In a way it's a return to the days of playing games on 8- and 16-bit platforms. :)


This was my internship subject (at another company) just before I graduated. I wonder what they used for the perceptual hash; ours was SIFT features. Happy to see that what I implemented would have been able to scale that much!


So when you run SIFT on an image, you get back dozens (maybe hundreds) of SIFT features. The trouble with SIFT features is that each one is a local image descriptor -- it describes a single point in the image. You can't just append the two lists of SIFT descriptors together and do a Hamming comparison on them, because there's no guarantee that both images will have all of the same SIFT descriptors, nor that they would be in the same order. To compare images via local descriptors, you must compare every local feature against every local feature in every other image. This is great for comparing two images together, or for finding where one image is located within another (homography matching), but it does not scale to large image sets.

In contrast, descriptors like perceptual hashes look at the entire image, and so are a _global_ image descriptor.
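A minimal sketch (assuming 64-bit hashes; not tied to any particular pHash implementation) of why global descriptors scale: comparing two perceptual hashes is one XOR plus a popcount, independent of how many features the images contain.

```rust
// Hamming distance between two 64-bit perceptual hashes:
// XOR the hashes, then count the differing bits.
fn hamming_distance(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}

fn main() {
    // Made-up hash values: hash_b differs from hash_a in one bit,
    // as a stand-in for a near-duplicate image.
    let hash_a: u64 = 0xD1D1_F0F0_ABAB_1234;
    let hash_b: u64 = 0xD1D1_F0F0_ABAB_1230;
    assert_eq!(hamming_distance(hash_a, hash_b), 1);
    assert_eq!(hamming_distance(hash_a, hash_a), 0);
    println!("distance = {}", hamming_distance(hash_a, hash_b));
}
```

In practice a small distance threshold (often a handful of bits) decides whether two images count as duplicates.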

There are ways to convert local SIFT image descriptors into a single global image descriptor for faster lookup (Bag of Visual Words is one technique that comes to mind), but SIFT and pHash really are two different categories of tool.
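A toy sketch of the Bag of Visual Words idea (all values here are made up for illustration: real SIFT descriptors are 128-dimensional and the "vocabulary" comes from k-means over many training descriptors). Each local descriptor is assigned to its nearest visual word, and the image is summarised as a histogram over words -- a single global vector:

```rust
// Index of the nearest vocabulary entry (by squared Euclidean distance).
fn nearest_word(desc: &[f32], vocab: &[Vec<f32>]) -> usize {
    let mut best = 0;
    let mut best_dist = f32::INFINITY;
    for (i, word) in vocab.iter().enumerate() {
        let dist: f32 = word.iter().zip(desc).map(|(x, y)| (x - y) * (x - y)).sum();
        if dist < best_dist {
            best_dist = dist;
            best = i;
        }
    }
    best
}

// Collapse a variable-length set of local descriptors into one
// fixed-length global histogram over visual words.
fn bovw_histogram(descriptors: &[Vec<f32>], vocab: &[Vec<f32>]) -> Vec<u32> {
    let mut hist = vec![0u32; vocab.len()];
    for d in descriptors {
        hist[nearest_word(d, vocab)] += 1;
    }
    hist
}

fn main() {
    // Toy 2-D "descriptors" and a two-word vocabulary.
    let vocab = vec![vec![0.0, 0.0], vec![1.0, 1.0]];
    let descriptors = vec![vec![0.1, 0.2], vec![0.9, 1.1], vec![0.0, 0.1]];
    assert_eq!(bovw_histogram(&descriptors, &vocab), vec![2, 1]);
    println!("{:?}", bovw_histogram(&descriptors, &vocab));
}
```

The histogram has a fixed length regardless of how many features the image produced, so it can be compared or indexed like any other global descriptor.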

More info on pHash: https://hackerfactor.com/blog/index.php%3F/archives/432-Look...

Example of SIFT for fine-grained image matching: https://docs.opencv.org/3.4/d1/de0/tutorial_py_feature_homog...


I have found ML image-categorization models to be an excellent method of extracting a unique descriptor. The image can be compressed into a compact signature for matching and storage.

I did it here: https://github.com/starkdg/phashml

https://github.com/starkdg/pyphashml

It is available as a Python module that uses a TensorFlow model.

Feel free to message me.


thanks, first link 404s

