That line refers to training the model from scratch. You can still run the trained model very quickly with one "cheap" GPU.
That said, I'm not sure why one wouldn't get a similar result training on the EC2 or GCE instances that have 8 V100s. Or even training with fewer GPUs but accumulating gradients to get the same batch size.
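For the accumulation trick, here's a minimal PyTorch sketch (the model, sizes, and step counts are made up, just to show the mechanics):

    import torch
    import torch.nn as nn

    # Toy setup; stand-ins for whatever model/data you actually train.
    model = nn.Linear(128, 10)
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    accum_steps = 8     # effective batch = accum_steps * micro_batch
    micro_batch = 16

    optimizer.zero_grad()
    for step in range(64):
        x = torch.randn(micro_batch, 128)             # fake data
        y = torch.randint(0, 10, (micro_batch,))
        loss = loss_fn(model(x), y) / accum_steps     # scale so the summed grads match one big batch
        loss.backward()                               # gradients accumulate in param.grad
        if (step + 1) % accum_steps == 0:
            optimizer.step()                          # update using the accumulated gradient
            optimizer.zero_grad()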
For all the 3D diagrams that I made (including the animated one at the end) I wrote code that used https://threejs.org/ and my custom library. It worked, but with a lot of hassle. In the future I'll likely try using Blender.
I've no idea what Nvidia uses, but you could do this pretty easily in Blender with the Freestyle NPR renderer and the built-in Import Images as Planes add-on.
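Something along these lines in the Blender Python console (assuming Blender 2.8x; the image path is a placeholder):

    import bpy

    # Turn on the Freestyle line renderer for the NPR look.
    bpy.context.scene.render.use_freestyle = True

    # "Import Images as Planes" ships with Blender but has to be enabled first.
    bpy.ops.preferences.addon_enable(module="io_import_images_as_planes")
    bpy.ops.import_image.to_plane(
        files=[{"name": "sketch.png"}],    # placeholder file name
        directory="/path/to/images",       # placeholder directory
    )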
I find this ... disquieting. I think it's fantastic, but I also find something about the lack of uncanny valley troubling.
I should feel happier about it, but I can't stop feeling a bit odd that a sketch can go to photorealistic this well now: I expected 5-10 more years for this.
I'm very glad to find a person who shares my feelings. Every time I open my feeds and see a headline about some kind of machine learning or AI breakthrough, I feel physically uncomfortable. Every time I open one of those links, there is a chance that it will change the equation of life.
The other day I opened one of those links and it was GPT-2. Besides all the insane implications of GPT-2, what bothers me is that I am no longer able to assume that any internet comment is written by a human, no matter how convincing. There are still comments that GPT-2 could not write, but anyone who points that out is pretty short-sighted, because it won't be long before there are vanishingly few comments that could not have been generated. I kind of liked knowing that a person was typing out (almost) all of those comments.
One of the biggest realizations I've had recently is that technology does not cut equally in both directions. Everyone in my generation has thought of technology as a neutral entity: for every benefit of a given technology, one can point out a corresponding disadvantage. On the surface it seems like the scale tips neither toward the societal disadvantages nor toward the societal benefits. This is a very fundamental belief, and it's wrong. It's funny how people put so much faith in such fuzzy logic.
The implications of that realization are difficult to swallow. It means that with every new technology introduced into the world, there is the potential for it to harm people's quality of life, or improve it. But there is no regulation of technology, so it's a crap shoot. We've been rolling the dice for a long time and we didn't even know it. And I think we've been winning. But I think that high-level automation is not going to be a win for us.
Besides all of that, there is absolutely no debate that these advancements in AI are to our generation what personal computers were to the baby boomer generation. Without close attention, we will fall behind, and our kids will have fluency in the new world of automation while we cling to very old and outdated patterns. In other words, it makes me feel very old.
> One of the biggest realizations I've had recently is that technology does not cut equally in both directions. Everyone in my generation has thought of technology as a neutral entity: for every benefit of a given technology, one can point out a corresponding disadvantage.
Maybe that's just because technological innovation is slowing down.
I'm not sure I agree that it's slowing down. I wish it would for a bit so everyone could catch their breath. Socially, we are just catching up with the implications of social media, and there is so much we haven't come to terms with, like CRISPR. It seems like what we've accomplished in the last 10-15 years would previously have happened over several generations. We really aren't ready for the changes that are already baked in.
> I'm not sure I agree that it's slowing down. I wish it would for a bit so everyone could catch their breath. Socially, we are just catching up with the implications of social media, and there is so much we haven't come to terms with, like CRISPR.
Societal changes lag technological ones by at least 5-10 years, so the changes we're feeling now are largely the result of technological changes in the early 2010s. But I do think technology today is slowing down. Individual processor speed certainly has, which has far-reaching implications. Cloud computing and GPUs have given general-purpose computing another step in "perceived" performance, but those are pretty much one-trick ponies.
If individual processor performance doesn't increase, the economies of scale that a large data center gives you eventually have diminishing marginal returns, and you're again limited by individual processor speed. GPUs similarly give a speed-up for applications that can be optimized for them, but eventually they will run into the same performance walls that general-purpose chips run into.
Other technologies, like machine learning and much of genetics, rely heavily on exponential improvements in the underlying hardware.
If the death of Moore's law is really happening, it will have far-reaching implications across all computation-based industries.
I'm not sure that I agree. The only images on the linked page are pretty tiny thumbnails, plus the poorly compressed video, where I can definitely see some artifacting that doesn't seem to be caused by the video compression. There's no way to know from that whether they actually did a good job.
If the thumbnails are the limit, I agree. But if you combine what was posted a few weeks back in unreal-faces with this, don't you wind up in an interesting place (maybe they had a higher computational cost or a more limited palette of outcomes)?
If this were introduced as background in a movie at HQ, I'm not sure I could always tell the difference, although it's equally possible the renders would be unsustainable under motion: not enough real-world change to look "real", and it re-enters the uncanny valley.
Imagine the impact this will have on moviemaking within just a few iterations of processing power. Thousands of hours of visual effects artists' work in film and TV will soon be abstracted into some high-level commands, transformed by software into moving film. Very exciting.
Just as record labels have become obsolete because of SoundCloud and home studios, Hollywood could become obsolete. Pure creativity would flow without investors and maniac producers.
The future golden age of indie creators could be fueled by this.
That is one of the best articles I have ever seen on a complex technical subject. A reasonable amount of math, great examples, and animations of the process.
If the massively online education people had that kind of quality, maybe people would actually finish the courses.
Is it really the same thing? The method you cite is a type of style transfer: input a sharp, focused image and you get a blurrier version out with the required style. You're removing information with a particular type of convolution.
The Nvidia version seems to inpaint new details into the user-segmented areas, like a collage of sorts.
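To make the distinction concrete, here's a toy PyTorch illustration of the "removing information with a convolution" side (sizes are arbitrary); inpainting has to do the opposite and invent plausible detail for the labeled regions:

    import torch
    import torch.nn.functional as F

    img = torch.rand(1, 1, 64, 64)                  # stand-in for a sharp image
    kernel = torch.full((1, 1, 5, 5), 1.0 / 25.0)   # 5x5 box-blur kernel
    blurred = F.conv2d(img, kernel, padding=2)      # high-frequency detail is discarded here
    # No convolution can recover img from blurred; that information is gone.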
"we first label a photograph or painting by hand to indicate its component textures. We then give a new labeling, from which the analogies algorithm produces a new photograph"
The DL approach generalizes over many images and can derive some "idea" of what a given class should look like (e.g. a tree).
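Roughly, the label map conditions the generator at every spatial location, so each labeled region pulls in the learned statistics of its class. A sketch of one such conditioning layer in PyTorch (in the spirit of spatially-adaptive normalization; illustrative only, not Nvidia's actual code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SegmapConditionedNorm(nn.Module):
        """Normalize features, then modulate them per-pixel from a one-hot label map."""
        def __init__(self, num_features, num_classes, hidden=64):
            super().__init__()
            self.norm = nn.BatchNorm2d(num_features, affine=False)
            self.shared = nn.Conv2d(num_classes, hidden, 3, padding=1)
            self.gamma = nn.Conv2d(hidden, num_features, 3, padding=1)
            self.beta = nn.Conv2d(hidden, num_features, 3, padding=1)

        def forward(self, x, segmap):
            # segmap: (N, num_classes, H, W) one-hot labels ("tree", "sky", ...)
            segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
            h = F.relu(self.shared(segmap))
            # Each class gets its own learned scale/shift at every pixel.
            return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

Stacking layers like this is why a blob labeled "tree" comes out looking like something learned from many trees, rather than a blurred copy of a single input.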