I'm not familiar enough with the field to understand how the "neural net" part feeds in, other than to do parallel computation on the x-pos, y-pos, (RGB) color-type-intensity tensor interpolated/weighted into a larger/finer tensor.
(linear algebra speak for upscaling my old DVD to HD, that sort of thing)
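For what it's worth, the fixed (non-learned) part of that picture, plain bilinear upscaling of an (H, W, RGB) tensor, can be sketched in a few lines of NumPy (my own illustration, not the article's code):

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale an (H, W, C) image by an integer factor with bilinear interpolation.

    Each output pixel is a weighted average of the four nearest input
    pixels; the weights come from the fractional offsets (the "delta"
    factors) of the output coordinate in input space.
    """
    h, w, c = img.shape
    out_h, out_w = h * scale, w * scale
    # Map each output pixel center back to input-space coordinates
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    dy = np.clip(ys - y0, 0, 1)[:, None, None]  # "delta-y" weights
    dx = np.clip(xs - x0, 0, 1)[None, :, None]  # "delta-x" weights
    top = img[y0][:, x0] * (1 - dx) + img[y0][:, x1] * dx
    bot = img[y1][:, x0] * (1 - dx) + img[y1][:, x1] * dx
    return top * (1 - dy) + bot * dy

lo = np.arange(12, dtype=float).reshape(2, 2, 3)  # tiny 2x2 "RGB frame"
hi = bilinear_upscale(lo, 2)                      # (4, 4, 3) result
```

This is the "fixed calculation" baseline; a neural upscaler keeps the same tensor-in, bigger-tensor-out shape but replaces the hand-chosen weighting with learned parameters.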
At the risk of exposing my ignorance, this has nothing to do with "AI", right? It's "just" parallel computation?
Yeah, no AI. It's low-level computer vision; there is no implicit understanding of the scene being enhanced here. We show the neural net several examples of low- and high-quality images, and it learns a function that makes the low-quality ones look more like the high-quality ones.
This may make you feel disappointed now, but in the write-up we are also pitching this same module for use in generative networks and other models that do build an understanding of the scene. Let's see what the community (and we ourselves) can do next...
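To make "it learns a function from low/high pairs" concrete, here is a deliberately tiny sketch of that training idea. Everything here is my own toy, not the article's network: the "model" is a single affine map fitted by gradient descent on synthetic degraded/clean pairs, standing in for the convolutional layers a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": the true degradation halves contrast and adds a bias.
hi_imgs = rng.random((32, 8, 8))   # high-quality targets
lo_imgs = 0.5 * hi_imgs + 0.2      # degraded low-quality inputs

# Learn an enhancement y = a*x + b by gradient descent on mean squared error.
a, b = 1.0, 0.0
lr = 0.5
for _ in range(2000):
    err = (a * lo_imgs + b) - hi_imgs
    a -= lr * np.mean(err * lo_imgs)  # gradient of 0.5*MSE w.r.t. a
    b -= lr * np.mean(err)            # gradient of 0.5*MSE w.r.t. b
```

Since the degradation here is exactly invertible, training recovers a ≈ 2.0 and b ≈ -0.4. Real image enhancement is not invertible, of course, which is why the learned function can only make the output *look* more like the high-quality examples.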
Wait, the neural network encodes within itself probability distributions of the various image patches it has seen. This is sort of like AI.
Approaches in the past used heuristics (like finding edges and upsampling them, etc). Those were fragile systems. In this approach, the system learns what's appropriate on its own.
I'm glad to hear that. I feared it might just paste in generic eyes wherever it sees eyes, but this way it might be much closer to what is really in the pixels.
I missed the part, though, where there was some "learning"/"adjusted prediction" in the interpolation function(s), rather than just a fixed calculation such as literal linear interpolation.
I was happy just to be able to tease apart the big equation before the Python code sample, but was too lazy to drill down into what the "delta-x"/"delta-y" factor-functions were.
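One way to see where the learning can enter: 2x bilinear upsampling is equivalent to a convolution with one particular fixed kernel, and a learned upsampler keeps that same structure but treats the kernel entries as trainable parameters. A 1-D sketch (my own illustration; I can't speak for the article's exact equation):

```python
import numpy as np

# The fixed 1-D bilinear weights for 2x upsampling. A learned upsampler
# would initialize a kernel like this and then adjust it by training.
fixed_kernel = np.array([0.25, 0.75, 0.75, 0.25])

def upsample_1d(signal, kernel):
    """2x upsample: insert zeros between samples, then convolve.

    This zero-stuffing-plus-convolution is the classic view of a
    transposed ("fractionally strided") convolution.
    """
    dilated = np.zeros(2 * len(signal))
    dilated[::2] = signal
    full = np.convolve(dilated, kernel)
    return full[1:1 + len(dilated)]  # trim to the centered output

sig = np.array([0.0, 1.0, 2.0, 3.0])
up = upsample_1d(sig, fixed_kernel)
# Interior outputs land exactly on the linear ramp: 1.25, 1.75, 2.25, 2.75
```

With the fixed kernel this is literally linear interpolation; the "learning" is that a network replaces those four hard-coded weights (per channel, in 2-D) with values fitted to training data.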
Still, this was a good presentation: somebody with little to no knowledge of the field, but some math, could get the gist of it. Kudos to the author.
I would expect AI to include some sort of "emergent behavior", so in a sense you are correct. If a program does exactly what we expect it to, exactly how we tell it to, it almost certainly isn't AI.
Unless we are telling it to "be intelligent" whatever that means.
> As machines become increasingly capable, facilities once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence" having become a routine technology.[3] Capabilities currently classified as AI include successfully understanding human speech,[4] competing at a high level in strategic game systems (such as Chess and Go[5]), self-driving cars, and interpreting complex data.