Subpixel: A subpixel convolutional neural network implementation with Tensorflow (github.com/tetrachrome)
210 points by jgoldsmith on Oct 1, 2016 | 44 comments



So, basically, this is the thing in a crime detective movie where the forensic analyst is looking at a terrible pixelated surveillance camera still and says "enhance," and the computer magically increases the resolution to reveal the culprit's face.

Just another entry on the list of "things that are supposed to be impossible but that convolutional nets can do now."


And that's how that guy whose face appeared a few times in ImageNet became the world's most wanted terrorist, on the run for thousands of crimes.


Ding ding.

But good luck convincing a jury that a maximum likelihood decode from a few grainy pixels wasn't reliable when it gave a crystal clear output.



I bet he's hiding out with that lab tech whose poor technique led to their DNA being in hundreds of crime scene samples.


Yup, to a certain point! There are information-theoretic limits, though. You can fill in information, but it will be biased, in this case by the dataset. If the "enhance" is too strong, we should be careful about what we do with the results in forensics.

But man, it can make your internet pics look smooth! :) Thanks for the comment!


If you have multiple images of the same scene (for example, from video frames), you should be able to use information across frames for a true enhancement?


Yes, and it's almost ridiculous how well that can work:

https://www.youtube.com/watch?v=ONZcjs1Pjmk


;) Sshh! Don't say that too loud yet. But remember our names...


So you're saying this could be useful for stereo imagery and video? ;- )


This (overenhancement) was a minor plot point in Crichton's novel Congo, IIRC.


Here is the ridiculous "Let's Enhance" supercut, all of which just became realistic: https://www.youtube.com/watch?v=LhF_56SxrGk

(Created by the super talented duncanrobson)


I imagine you'd want PII erased from the training set, but the danger stands.


Except it might unblur to the face of someone else.


I think it's always problematic to compare against images upscaled via nearest-neighbor. The big pixels are hard for our brains to parse; we latch onto all the blocky edges.

A good content-unaware upscaling would be a better baseline (one of the default Photoshop algorithms).

I also wonder what they used for the downscaling. I see 4x4 pixel blocks, but also some with 3px or 7px lengths.

This looks pixelated, yet it's supposed to be a source file?: https://raw.githubusercontent.com/Tetrachrome/subpixel/d2e28...


From trustswz's comment:

https://arxiv.org/abs/1609.04802

The pic with the boat on page 13 is interesting. In the SRGAN version I would take the shore for some sort of cliff, while the original shows separated boulders.


Interesting image "upscale" algorithm.

I'm not familiar enough with the field to understand how the "neural net" part feeds in, other than to do parallel computation on the x-pos, y-pos, (RGB) color-type-intensity tensor interpolated/weighted into a larger/finer tensor.

(linear algebra speak for upscaling my old DVD to HD, that sort of thing)

At the risk of exposing my ignorance, this has nothing to do with "AI", right? It's "just" parallel computation?


Yeah, no AI. It's low-level computer vision; there is no implicit understanding of the scene being enhanced here. We show the neural net examples of low- and high-quality images, and it learns a function that makes the low-quality ones look more like the high-quality ones.

This may disappoint you now, but in the write-up we are also pitching this same module to be used in generative networks and other models that do build an understanding of the scene. Let's see what the community (and we ourselves) can do next...
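
For intuition, here is a rough, hypothetical sketch of that training setup. It uses modern tf.keras rather than the repo's 2016-era TensorFlow code, and the layer sizes, upscaling factor, and MSE loss are illustrative placeholders rather than the repo's actual choices:

    import tensorflow as tf

    r = 4  # upscaling factor (illustrative)

    # Convolutions run on the low-resolution grid; the final layer emits
    # 3*r*r channels, which the subpixel (depth-to-space) step rearranges
    # into an image r times larger in each dimension.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3 * r * r, 3, padding="same"),
        tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, r)),
    ])

    # Training pairs: lr_images are downscaled copies of hr_images, and the
    # loss simply pushes the network's output toward the original pixels.
    model.compile(optimizer="adam", loss="mse")
    # model.fit(lr_images, hr_images, epochs=..., batch_size=...)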


Wait, the neural network encodes within itself probability distributions of the various image patches it has seen. This is sort of like AI.

Approaches in the past used heuristics (like finding edges and upsampling them, etc). Those were fragile systems. In this approach, the system learns what's appropriate on its own.


This is not AI in any real sense. It is a fairly straightforward machine learning application to computer vision.


Knowledge is a form of compression


I'm glad to hear that. I feared it might just paste in any eyes where it sees some eyes, but this way it might be much closer to what is really in the pixels.


Everything that we understand how to do is "not really AI". It's only "AI" when it's still a mystery. At least that's the way people act.


Fair enough :-)

I missed the part, though, where there was some "learning"/"adjusted prediction" in the interpolation function(s), rather than just a fixed calculation such as a literal linear interpolation.

I was happy just to be able to tease apart the big equation before the Python code sample, but was too lazy to drill down into what the "delta-x"/"delta-y" factor functions were.

Still, this was a good presentation: somebody with little to no knowledge of the field, but some math, could get the gist of it. Kudos to the author.


I would expect AI to include some sort of "emergent behavior", so in a sense you are correct. If a program does exactly what we expect it to, exactly how we tell it to, it almost certainly isn't AI.

Unless we are telling it to "be intelligent" whatever that means.


> As machines become increasingly capable, facilities once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence" having become a routine technology.[3] Capabilities currently classified as AI include successfully understanding human speech,[4] competing at a high level in strategic game systems (such as Chess and Go[5]), self-driving cars, and interpreting complex data.

(from https://en.wikipedia.org/wiki/Artificial_intelligence)


This makes a lot of sense and has the added benefit of forcing us to reconsider what we mean by "intelligence."


It seems that this subpixel convolution layer is equivalent to what is known in the neural-net community as the "deconvolution layer", but it is much more memory- and computation-efficient. The interlacing rainbow picture was a bit hard to understand until I read this: https://export.arxiv.org/ftp/arxiv/papers/1609/1609.07009.pd...
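
To make the rearrangement concrete, here is a minimal NumPy sketch of that periodic shuffling, assuming channels-last (H, W, C*r*r) features; the exact channel-ordering convention varies between implementations, including the repo's:

    import numpy as np

    def periodic_shuffle(x, r):
        """Rearrange (H, W, C*r*r) features into an (H*r, W*r, C) image.

        Each group of r*r channels at one low-resolution position is spread
        over an r x r block of high-resolution pixels, so the whole step is
        just a reshape/transpose -- no extra multiply-adds or zero-stuffing
        like a deconvolution layer would need.
        """
        H, W, Crr = x.shape
        C = Crr // (r * r)
        x = x.reshape(H, W, C, r, r)     # (H, W, C, r, r)
        x = x.transpose(0, 3, 1, 4, 2)   # (H, r, W, r, C)
        return x.reshape(H * r, W * r, C)

    # toy example: 2x2 features with 3*2*2 = 12 channels -> a 4x4 RGB image
    lr_features = np.random.rand(2, 2, 12).astype(np.float32)
    print(periodic_shuffle(lr_features, r=2).shape)  # (4, 4, 3)

TensorFlow exposes essentially the same rearrangement as depth_to_space.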


Interesting. They should post more examples (not with just faces), or make an online demo, like waifu2x [1]

[1] http://waifu2x.udp.jp/


I'm not sure, but there seems to be something wonky in the input images. They are very blocky, so I thought that they would be just pixel doubled (or quadrupled) from low-res pictures, but the blockiness lacks the regularity I'd expect from pixel-doubled images.

How were the input images prepared?


Super wonky indeed. It should also be compared against something like Photoshop's bicubic enlargement, or shown at the original size, because the brain gets stuck on the pixel edges.
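
For anyone who wants to eyeball such a baseline themselves, a bicubic enlargement is easy to produce with Pillow; the file name and factor below are placeholders, not anything from the repo:

    from PIL import Image

    # Hypothetical file name; substitute one of the repo's low-resolution inputs.
    lr = Image.open("input_lowres.png")

    # Bicubic enlargement as a content-unaware baseline to compare against
    # the network's output, instead of blocky nearest-neighbor.
    factor = 4
    hr_bicubic = lr.resize((lr.width * factor, lr.height * factor), Image.BICUBIC)
    hr_bicubic.save("input_bicubic_x4.png")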


If you are interested in how it compares to bicubic or the original, check these papers using the subpixel convolutional layer: https://arxiv.org/abs/1609.05158 https://arxiv.org/abs/1609.04802


thanks, impressive


The explanation in the README of the github project is excellent and well-written! Here's a really great set of animations by Vincent Dumoulin on how various conv operators work: https://github.com/vdumoulin/conv_arithmetic


And https://arxiv.org/abs/1603.07285 for the corresponding paper. Really clear and easy-to-understand explanation of some of the math.


This is impressive! But I'll be really impressed once this 'new thing' brings us roto masks in motion. That is, isolating objects from the background in a movie with pixel-perfect accuracy. It will put a lot of people out of a job and make a lot of people happy at the same time.


Considering motion blur, "pixel-perfect" is a difficult requirement.


There are ways around it. In the case of motion blur, the edge gets a 'feather' where the mask's alpha is a gradient. In severe cases, the mask's curve has a lot of control segments, each with a different in and out feather defined.

edit:

example: https://youtu.be/yZyIYUEfT3U?t=71

Also, masks themselves can be motion-blurred, and if the motion-blur approximation is close enough to the footage, then it's good: https://www.youtube.com/watch?v=biginQL6NIo

And here's what pulling a matte with state-of-the-art tools looks like: https://www.youtube.com/watch?v=8oQqr6Lfmag Still a pain.


And I always wondered how those photo enhancers in Blade Runner worked...!


ENHANCE. ENHANCE.



The problem with subpixel images is that there are RGB and BGR monitors. Not only that, there are horizontal and vertical variations. And there's no way to tell which one the user has on the web. And that's not even counting all the mobile layouts like PenTile.

It's still useful, though; browsers, for instance, could use it for displaying downscaled images.


This project is using 'subpixel' not to refer to monitor subpixels, but to the lost information between existing pixels in an image.

You're right though, and that's why chroma hinting for subpixel AA has fallen out of favor. It also doesn't work on mobile where the screen can be rotated from RGB-horz to RGB-vert at a moment's notice. This was changed for ClearType in Windows 8 (DirectWrite never did chroma hinting).


This is supposed to be used in the data-processing step. You load your image from JPEG or your video using ffmpeg, enhance the images, and then pass them to the next step where color rendering is done. You can do that in the browser or on mobile just as well.





