So, basically, this is the thing in a crime detective movie where the forensic analyst is looking at a terrible pixelated surveillance camera still and says "enhance," and the computer magically increases the resolution to reveal the culprit's face.
Just another entry on the "things that are supposed to be impossible that convolutional nets can do now."
Yup, up to a certain point! There are information-theoretic limits, though. You can fill in information, but there will be biases, in this case defined by the dataset. If the "enhance" is too strong, we should be careful about what we do with the results in forensics.
But man, it can make your internet pics look smooth! :)
Thanks for the comment!
If you have multiple images of the same scene (for example, from video frames), shouldn't you be able to use the information across frames for a true enhancement?
I think it's always problematic to compare to images upscaled via nearest-neighbor. The big pixels are hard for our brains to parse; we latch onto all the blocky edges.
A good content-unaware upscaling (one of Photoshop's default algorithms) would be a fairer baseline; a quick way to produce both is sketched below.
I also wonder what they used for the downscaling. I see 4x4 pixel blocks, but also some with 3px or 7px lengths.
The pic with the boat on page 13 is interesting. In the SRGAN version I would take the shore for some sort of cliff, while the original shows separated boulders.
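For reference, producing both baselines mentioned above (nearest-neighbor and a content-unaware bicubic) takes only a couple of lines with Pillow. A rough sketch, with made-up file names and a 4x factor:

```python
from PIL import Image

img = Image.open("lowres.png")          # hypothetical low-res input
w, h = img.size

# Nearest-neighbor: the blocky baseline most write-ups show.
nearest = img.resize((4 * w, 4 * h), Image.NEAREST)
# Bicubic: content-unaware but smooth, close to Photoshop's default enlargement.
bicubic = img.resize((4 * w, 4 * h), Image.BICUBIC)

nearest.save("nearest_x4.png")
bicubic.save("bicubic_x4.png")
```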
I'm not familiar enough with the field to understand how the "neural net" part feeds in, other than to do parallel computation on the x-pos, y-pos, (RGB) color-type-intensity tensor, interpolated/weighted into a larger/finer tensor.
(linear algebra speak for upscaling my old DVD to HD, that sort of thing)
At the risk of exposing my ignorance, this has nothing to do with "AI", right? It's "just" parallel computation?
Yeah, no AI. It's low-level computer vision. There is no implicit understanding of the scene being enhanced here. We show the neural net several examples of low- and high-quality images, and it learns a function that makes the low-quality ones look more like the high-quality ones.
This may be disappointing now, but in the write-up we are also pitching this same module for use in generative networks and other models that do build an understanding of the scene. Let's see what the community (and we ourselves) can do next...
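To make that concrete, here is a minimal sketch of the training setup just described (not the authors' code; PyTorch-style, with a hypothetical `data_loader` yielding low-res/high-res pairs):

```python
import torch
import torch.nn as nn

r = 4  # upscaling factor

# ESPCN-style network: a few convs in low-res space, then a sub-pixel shuffle at the end.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3 * r * r, kernel_size=3, padding=1),
    nn.PixelShuffle(r),  # rearranges (C*r*r, H, W) -> (C, H*r, W*r)
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for lowres_batch, highres_batch in data_loader:  # hypothetical loader of (LR, HR) pairs
    pred = model(lowres_batch)
    loss = loss_fn(pred, highres_batch)          # make the output look like the high-quality image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```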
Wait, the neural network encodes within itself probability distributions of the various image patches it has seen. This is sort of like AI.
Approaches in the past used heuristics (like finding edges and upsampling them, etc). Those were fragile systems. In this approach, the system learns what's appropriate on its own.
I'm glad to hear that. I feared it might just paste in any eyes where it sees some eyes, but this way it might be much closer to what is really in the pixels.
I missed the part, though, where there was some "learning"/"adjusted prediction" in the interpolation function(s), rather than just a fixed calculation such as a literal linear interpolation.
I was happy just to be able to tease apart the big equation before the Python code sample, but was too lazy to drill down into what the "delta-x"/"delta-y" factor-functions were.
Still, this was a good presentation: somebody with little to no knowledge of the field, but some math, could get the gist of it. Kudos to the author.
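To make the "where does the learning happen" point above concrete: plain bilinear interpolation is a fixed calculation with no parameters, while the convolution feeding the sub-pixel shuffle has weights that gradient descent adjusts. A tiny illustrative sketch (PyTorch, toy shapes, not from the write-up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)  # a toy low-res input

# Fixed calculation: bilinear interpolation has no parameters, so nothing is learned.
fixed = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

# Learned version: the conv weights feeding the sub-pixel shuffle are free parameters,
# adjusted by training on (low-res, high-res) pairs.
learned_up = nn.Sequential(nn.Conv2d(3, 3 * 2 * 2, kernel_size=3, padding=1),
                           nn.PixelShuffle(2))
print(sum(p.numel() for p in learned_up.parameters()))  # > 0: these weights are what "learns"
```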
I would expect AI to include some sort of "emergent behavior", so in a sense you are correct. If a program does exactly what we expect it to, exactly how we tell it to, it almost certainly isn't AI.
Unless we are telling it to "be intelligent" whatever that means.
> As machines become increasingly capable, facilities once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence" having become a routine technology.[3] Capabilities currently classified as AI include successfully understanding human speech,[4] competing at a high level in strategic game systems (such as Chess and Go[5]), self-driving cars, and interpreting complex data.
It seems that this subpixel convolution layer is equivalent to what is known in the neural net community as the "deconvolution layer", but it is much more memory- and computation-efficient. The interlacing rainbow picture was a bit hard to understand until I read this: https://export.arxiv.org/ftp/arxiv/papers/1609/1609.07009.pd...
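The interlacing itself is just a reshape-and-transpose. A rough NumPy sketch of the periodic shuffle (channel ordering here follows the depth_to_space convention, which may differ slightly from the repo's):

```python
import numpy as np

def periodic_shuffle(x, r):
    """Rearrange an (H, W, C*r*r) feature map into an (H*r, W*r, C) image.

    Each group of r*r channels supplies the r x r sub-pixel grid of one output block,
    which is what the rainbow figure is illustrating.
    """
    H, W, Crr = x.shape
    C = Crr // (r * r)
    x = x.reshape(H, W, r, r, C)      # split channels into (dy, dx, c)
    x = x.transpose(0, 2, 1, 3, 4)    # interleave rows and columns: (H, dy, W, dx, c)
    return x.reshape(H * r, W * r, C)

out = periodic_shuffle(np.random.rand(16, 16, 3 * 4 * 4), r=4)
print(out.shape)  # (64, 64, 3)
```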
I'm not sure, but there seems to be something wonky in the input images. They are very blocky, so I thought that they would be just pixel doubled (or quadrupled) from low-res pictures, but the blockiness lacks the regularity I'd expect from pixel-doubled images.
Super wonky indeed. It should also be compared to something like Photoshop's bicubic enlargement, or shown at the original size, because the brain gets stuck on the pixel edges.
The explanation in the README of the github project is excellent and well-written! Here's a really great set of animations by Vincent Dumoulin on how various conv operators work: https://github.com/vdumoulin/conv_arithmetic
This is impressive! But I'll be really impressed once this 'new thing' brings us roto masks in motion, that is, isolating objects from the background in a movie with pixel-perfect accuracy. It will put a lot of people out of a job and make a lot of people happy at the same time.
There are ways around it. In the case of motion blur, the edge has a 'feather' where the mask's alpha is a gradient. In severe cases, the mask's curve has a lot of control segments, each with a different in and out feather defined.
The problem with subpixel images is that there are RGB and BGR monitors. Not only that, there are horizontal and vertical variations. And there's no way to tell which one the user is using on the web. And that's not even counting all the mobile layouts like PenTile.
It's still useful, though; browsers, for instance, could use it for displaying downscaled images.
This project is using 'subpixel' not to refer to monitor subpixels, but instead, lost information between existing pixels in an image.
You're right though, and that's why chroma hinting for subpixel AA has fallen out of favor. It also doesn't work on mobile where the screen can be rotated from RGB-horz to RGB-vert at a moment's notice. This was changed for ClearType in Windows 8 (DirectWrite never did chroma hinting).
This is supposed to be used in the data-processing step: you load your image from JPEG, or your video using ffmpeg, enhance the images, and then pass them to the next step where color rendering is done. You can do that in the browser or on mobile just fine.
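Roughly, that pipeline looks like this. A sketch only: `enhance()` is a stand-in for the trained model, and the file and directory names are made up.

```python
import subprocess
from pathlib import Path
from PIL import Image

Path("frames").mkdir(exist_ok=True)
Path("enhanced").mkdir(exist_ok=True)

# Dump the video frames to PNGs with ffmpeg.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%05d.png"], check=True)

for frame_path in sorted(Path("frames").glob("*.png")):
    frame = Image.open(frame_path)
    upscaled = enhance(frame)                # hypothetical: run the super-resolution model
    upscaled.save(Path("enhanced") / frame_path.name)
# The enhanced frames then go on to whatever does the color rendering / display step.
```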