The author may want to implement a labelling method for users for a few days to train the discriminator a little better. It would be a cool human-in-the-loop exercise.
I was surprised to see that the easiest way to figure out if a face was real was by looking at the background. The face generator seems to be terrible at everything but faces. There are often strange visual artifacts and clipping issues, and the face generator never seems to put another person in the background of the picture.
I read this as "which France is real" and was slightly disappointed when I wasn't able to test my incomplete knowledge of European geography against a neural net.
ML generates some rather bad artifacts. Just look for those.
Even in this[1] difficult comparison you can see the non-human repeating skin patterns on the right and the awkward teeth contour. Also, hair lying on skin often looks wet and bends unnaturally.
When comparing wrinkly people, it gets a little harder.
That one is super hard when looking only at the face.
Look at the clothes and necklace. The clothes are different on the left and right sides of her face - the moment you see it you can't unsee it, and it's obviously wrong.
After yesterday's 10 minutes of watching those fake faces, this test was super simple for me. I did about 25 without a mistake, which kinda shows the fake generation has a long way to go before it can fool good eyes.
You can pretty reliably guess correctly if you look for ghosting/blurring/chromatic aberration along sharp edges, e.g. around the eyes, on the chin, and in hair. ML hasn't quite mastered the fine details yet.
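That edge-misalignment cue can even be scored programmatically. Here's a minimal sketch (not anything the site actually uses, and the function names are made up): compute a per-channel edge map with plain NumPy, then measure how much the red and blue channels' edges disagree. Real chromatic aberration or GAN fringing shifts the channels relative to each other, so misaligned edges push the score up.

```python
import numpy as np

def channel_edge_maps(img):
    """Per-channel gradient magnitude via finite differences, normalized to [0, 1]."""
    maps = []
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[:, :, c].astype(float))
        g = np.hypot(gx, gy)
        if g.max() > 0:
            g /= g.max()
        maps.append(g)
    return maps

def aberration_score(img):
    """Mean disagreement between R and B edge maps near strong edges.

    Higher score = the channels' edges are misaligned (ghosting/fringing).
    The 0.2 edge threshold is an arbitrary choice for this sketch.
    """
    r, _, b = channel_edge_maps(img)
    mask = (r > 0.2) | (b > 0.2)  # only compare where either channel has an edge
    if not mask.any():
        return 0.0
    return float(np.abs(r - b)[mask].mean())

# Demo on synthetic data: a sharp white square, then the same image
# with its blue channel shifted 2 px to fake color fringing.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[16:48, 16:48] = 255
fringed = img.copy()
fringed[:, :, 2] = np.roll(img[:, :, 2], 2, axis=1)

print(aberration_score(img))      # aligned channels: identical edge maps
print(aberration_score(fringed))  # shifted channel: edges disagree
```

On a real photo you'd expect a low score; on faces with the fringing people describe here, edge regions where one channel peaks and another doesn't would inflate it. It's a crude heuristic, not a classifier, but it captures the "look along sharp edges" advice in code.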