Hacker News
Implementing the Goodfellow GANs paper (ym2132.github.io)
104 points by Two_hands 11 months ago | 18 comments



This is a blast from the past; I still remember the StyleGAN demos and how cool they were at the time. https://www.youtube.com/watch?v=Ps7bmdxy0Xc


Right, even though the paper is almost 10 years old, I still found it fascinating. I hope you enjoyed the post!


        # shuffle the combined batch to prevent the model from learning order
        indices = torch.randperm(combined_images.size(0))
        combined_images = combined_images[indices]
        combined_labels = combined_labels[indices]
   
You don’t need to do this


Is it better to train without the shuffling, or does shuffling have a negligible effect?


I'd assume there's no real state the network can "remember" between iterations, so shuffling will at best just waste time.


I had been thinking about the ordering, but it makes sense that it doesn't matter. I have read that it is actually better to train the discriminator on separate batches, with generated and real images each in their own batch, before the gradient step.
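
Something like this, as a minimal PyTorch sketch (D, G, d_optimizer, real_images, batch_size, and latent_dim are illustrative names, not taken from the post, and D is assumed to end in a sigmoid):

    import torch
    import torch.nn.functional as F

    # Assumes D (discriminator), G (generator), d_optimizer,
    # real_images, batch_size, and latent_dim are defined elsewhere.
    d_optimizer.zero_grad()

    # Real batch: target label 1
    real_preds = D(real_images)
    loss_real = F.binary_cross_entropy(real_preds, torch.ones_like(real_preds))
    loss_real.backward()

    # Generated batch: target label 0 (detach so no gradient flows into G)
    fake_images = G(torch.randn(batch_size, latent_dim))
    fake_preds = D(fake_images.detach())
    loss_fake = F.binary_cross_entropy(fake_preds, torch.zeros_like(fake_preds))
    loss_fake.backward()

    # One gradient step using the accumulated gradients from both batches
    d_optimizer.step()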


Are GANs useful for synthetic data generation for transformer-based models?


Probably. Apple published a paper back in 2017 about using GANs to improve synthetic images for training models (though not transformers).

The examples they give are for eye and hand tracking -- which not coincidentally are used for navigating the Apple Vision Pro user interface.

https://machinelearning.apple.com/research/gan


It'd be cool to run some tests where you train a model on real data and then supplement the training set with generated samples.


Yes, the concept is still powerful and in use today.

As I understand the RLHF method of training LLMs, it involves creating an internal "reward model": a secondary model trained to predict the score of an arbitrary generation. This feels very analogous to the "discriminator" half of a GAN, because both critique the generation produced by the other half of the network, and the resulting score is fed back to train the primary network through positive and negative rewards.

I'm sure it's an oversimplification, but RLHF feels like GANs applied to the newest generation of LLMs; I rarely hear people talk about it in these terms, though.
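
To make the analogy concrete, here is a heavily simplified sketch. The names (discriminator, reward_model, generated_images, generated_ids, log_probs) are hypothetical placeholders, and the policy update shown is a bare REINFORCE-style estimate, not the PPO objective actually used in RLHF:

    import torch
    import torch.nn.functional as F

    # GAN: the discriminator's score on generated samples drives the generator's loss.
    d_scores = discriminator(generated_images)            # hypothetical discriminator
    g_loss = F.binary_cross_entropy(d_scores, torch.ones_like(d_scores))

    # RLHF: the reward model's score on generated text drives the policy's loss.
    rewards = reward_model(generated_ids)                 # hypothetical reward model
    policy_loss = -(rewards.detach() * log_probs).mean()  # simplified REINFORCE-style update

    # In both cases a learned critic scores the generator's output,
    # and that score is what the generating model is optimized against.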


I think diffusion models are useful too; I'm currently working on a project that uses them to generate medical data. Both seem useful, as both are targeted at generating data, especially in areas where data is hard to come by. Writing this blog post also made me wonder about applications in finance.


I agree -- I would love to see diffusion models applied to more types of data, in particular more experiments with text generation, since a diffusion model could attend to the "whole text" rather than suffering the myopia that can come from simple next-token prediction.


Adversarial loss is used in many settings, for example when training a VAE, and a VAE can use a transformer architecture.
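
For example, a VAE-GAN-style objective adds an adversarial term to the usual reconstruction + KL loss. A rough sketch, where encoder, decoder, discriminator, x, and the 0.1 weight are all illustrative:

    import torch
    import torch.nn.functional as F

    # Standard VAE forward pass with the reparameterization trick
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    recon = decoder(z)

    recon_loss = F.mse_loss(recon, x)
    kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # Adversarial term: train the decoder to make reconstructions
    # that the discriminator classifies as real.
    d_out = discriminator(recon)
    adv_loss = F.binary_cross_entropy(d_out, torch.ones_like(d_out))

    loss = recon_loss + kl_loss + 0.1 * adv_loss  # illustrative weighting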



Great writeup, thank you! Nicely done!


Thank you, I appreciate the kind comments!


Cool


Thank you



