Show HN: Fake or real? Try our AI image detector (nuanced.dev)
22 points by aymandfire on March 18, 2024 | 48 comments
Hey HN! We're Ayman and Dylan, co-founders of Nuanced (https://www.nuanced.dev/). We want to share a tool we’re working on that detects whether an image is real or AI-generated: https://trial.nuanced.dev/demo/.

The UI is bare-bones but you’ll get the idea. Drag or upload an image and our tool will display the probabilities it assigns to the image being AI-generated or real. If you want, you can click “No, it’s AI” to confirm that the image was AI-generated, or “No, it’s real” to confirm that it wasn’t.

Why we’re working on this: as the adoption and quality of AI-generated images rise, they increasingly blur the line between real and artificial, and the risk of fraud and misinformation rises with them. Not being able to trust what you see online erodes whatever authenticity online material still has. Companies like dating apps, news sites, and trust and safety teams have a growing need to distinguish AI-generated images from authentic ones.

The models we built are trained on output from a range of generators, such as DALL-E 3, Midjourney, and SDXL, with continuous integration of data from the latest AI image generators. Our technology can detect deepfakes and verify user profile images, documents, IDs, or media images. It can also flag fake or counterfeit products, services, or experiences being marketed on e-commerce platforms.
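
For intuition, here’s a generic sketch of how a real-vs-AI binary image classifier can be trained in PyTorch. To be clear, this is illustrative only, not our actual architecture, data, or training pipeline, and the data/train folder layout is assumed:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import models, transforms
    from torchvision.datasets import ImageFolder

    # Assumed layout: data/train/real/... and data/train/ai/...
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
    train_set = ImageFolder("data/train", transform=preprocess)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Pretrained backbone with a fresh 2-class head: real vs. AI-generated.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # one epoch shown for brevity
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

In practice the hard part is the data: keeping the “real” and generated distributions matched, so the model learns generation artifacts rather than content differences.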

We hope it’s fun and would be very interested in any cases it gets wrong, as well as whatever else you’d like to ask or say!



I uploaded a drawing I had sitting around on my desktop and it was 75% confident that it was AI. It's one I did myself in Adobe Illustrator, so I guess I can say I'm 100% confident that it was AI, but with a different expansion of that abbreviation.


The real AI was the Adobe software that Figma obsoleted along the way :)


AI == An Illustrator!?

You got me thinking that sometimes when people say AI, they mean something very different from what I thought AI meant.


Adobe Illustrator files have always had the `.ai` file extension.


Adobe Illustrator. I edited the comment to make that clear. :)


lol. Now I will know what people are talking about at all the cool tech parties.


I think they meant Adobe Illustrator.


Adobe Illustrator


I was thinking about this the other day. The issue is if you have an imperfect test to detect fakes, it gives even more credibility to fakes that pass the test.

If there are no tests, however, then we're left to question the validity of everything.


What makes it even more of an issue is that it's comparatively easy to generate 1,000 images of a scene and push them all through until you get one that happens to line up in such a way as to pass the detection (compared to having to physically paint 1,000 scenes).
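
Sketched out (generate_image and detector_score are hypothetical stand-ins for whatever generator and detector you're attacking):

    # Hypothetical adversarial rejection sampling: keep generating
    # candidates until one slips under the detector's threshold.
    def find_passing_fake(prompt, threshold=0.5, max_tries=1000):
        for _ in range(max_tries):
            img = generate_image(prompt)         # stand-in generator call
            if detector_score(img) < threshold:  # stand-in P(AI) score
                return img                       # this fake "passes"
        return None

Generation is cheap and the detector gives you an oracle to test against, so the economics strongly favor the attacker.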


Right, exactly; much like p-hacking in studies. Publish enough research papers and you'll get significant results by chance, and it gets into a headline as "Science shows..." and people believe it.

Unfortunately, in this space the number of deepfakes we can create is massive, so even a system that's 99.999% accurate will let a lot of fakes through and grant them credibility.
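
Back-of-envelope, with made-up round numbers:

    # Purely illustrative numbers.
    fakes_submitted = 1_000_000_000  # fakes pushed at the detector
    false_negative_rate = 0.00001    # the "99.999% accurate" case
    passed = fakes_submitted * false_negative_rate
    print(f"{passed:,.0f} fakes pass and gain credibility")  # 10,000

Ten thousand certified-as-real fakes, from a single billion attempts.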

I think we have to focus on an alternative path where we assume any digital content can be fake, unless its creation is provable through some verified sources or methods.

Maybe it's possible to embed some kind of digital fingerprint into images, so as to say: this image was definitely taken by this camera. I'm pretty sure something like this has been done.


I'm not even sure the "digital fingerprint" is entirely helpful. Have you seen the moon issue? https://www.reddit.com/r/Android/comments/11nzrb0/samsung_sp...


I ended up reading that whole thing, but by digital fingerprint I meant something like embedding a cryptographic signature in an image, or some other way of proving an image was produced by a specific camera. It'd be great, for example, if we could verify that security camera footage was real and not a fake.
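
As a minimal sketch of what I mean, assuming the camera holds a private signing key (this uses the Python cryptography library's Ed25519 support; key distribution and tamper-resistance are the hard, hand-waved parts):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Inside the camera: sign the raw image bytes at capture time.
    camera_key = Ed25519PrivateKey.generate()
    with open("frame.jpg", "rb") as f:
        image_bytes = f.read()
    signature = camera_key.sign(image_bytes)

    # Anyone holding the camera's public key can verify later.
    public_key = camera_key.public_key()
    try:
        public_key.verify(signature, image_bytes)
        print("bytes match this camera's signature")
    except InvalidSignature:
        print("tampered, or not from this camera")

Something along these lines is what the C2PA / Content Credentials effort standardizes.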


Continuing this thought: a photo may be genuine while the actual scene itself is staged. It will pass all such tests.


Does any image that's been touched by AI count as fake? For instance, if you took a real photo and asked AI to widen it by 1 pixel, you could argue that this is a new "fake" image generated by AI, but it's 99.9% real. What about something that's been AI-upscaled, like with DLSS?


It shouldn't classify the example you described as AI-generated, but we are looking at expanding functionality for similar use cases. With respect to AI upscaling, the current model isn't looking for it specifically, since many AI-generated images may have been upscaled at some point without us necessarily being able to denote that when labeling the data.


Cool hack to get some human-feedback data.

Unfortunately, your system doesn't seem to accept an image upload.

https://trial.nuanced.dev/demo/upload_progress has an event stream that polls every 2 seconds or so but doesn't seem to return any success criterion.
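
For anyone reproducing this, roughly how I watched the stream (plain requests; this just echoes whatever the endpoint emits):

    import requests

    url = "https://trial.nuanced.dev/demo/upload_progress"
    with requests.get(url, stream=True, timeout=30) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            if line:
                print(line)  # no terminal success event ever arrived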


Data collection for adversarial training was also my first thought. The same training data used for classifying images as AI-generated can also be used to train the AI to generate images that fool more people.


Well, the company's job is to detect deepfakes/photoshops. From their site:

> Detecting Authenticity in the Age of AI: We detect AI-generated images to protect the integrity and authenticity of your service.


True enough, though we're interested in stress testing our model to see where the gaps are right now.


Are you able to use the image upload box to select a file or drag-and-drop images?


I think iPhones do some pretty complex image processing, and other brands of phones go even further. I believe there have been instances of phones adding in completely new details (although maybe I hallucinated that detail).

What’s the expected result, and what should we give as the “true” answer if we take a picture with our phone and upload it?


In the case of pictures taken with your phone, the "true" answer would be "real", since it's not a synthetically generated image, just some post-processing/clean-up. The overall classification is going to be affected by how much of the image has been altered; if it's only a small part, it shouldn't affect the outcome too much.


Samsung devices, for example, will add detail to what they think is the Moon.

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...

> The test of Samsung’s phones conducted by Reddit user u/ibreakphotos was ingenious in its simplicity. They created an intentionally blurry photo of the Moon, displayed it on a computer screen, and then photographed this image using a Samsung S23 Ultra. As you can see below, the first image on the screen showed no detail at all, but the resulting picture showed a crisp and clear “photograph” of the Moon. The S23 Ultra added details that simply weren’t present before. There was no upscaling of blurry pixels and no retrieval of seemingly lost data. There was just a new Moon — a fake one.


I think the example where a person in the background had their face replaced with a leaf came from an iPhone trying to clean up the image and getting confused.

Of course my favorite is the phone that recognized a humanoid shape and placed Ryan Gosling in the image.

https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...


If the iPhone one is the one I’m thinking of, it turns out the leaf was actually there.

https://twitter.com/mitchcohen/status/1476951534160257026


Nah, Samsung was substituting in a really good picture of the moon for people's regular moon pictures: https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...


I tried with this eclipse photo: https://www.cnet.com/a/img/resize/535a36e2cb72f06e9b3dc04254...

It said 92% AI. Do you have any stats on how often it gets it right?


We're generally seeing around 90% accuracy on test sets, where the distribution is primarily images generated by diffusion models and "real" images that are mostly ground-level photography. We'll have to take astrophotography into account!


It’s actually pretty accurate for the images I had on hand. It only failed once I started giving it artwork rather than photos.

Low-quality/distorted images also come out as AI.


That's great to know actually, especially the low-quality/distorted images bit.


I like how confident the model is when you give it an anime image: it always thinks it's AI, even though I only provided images created by humans. I don't think I've ever seen a worse AI detector in my life. I hope that this is not a real demo of the product.


> don't think I've ever seen a worse AI detector in my life. I hope that this is not a real demo of the product.

"I tried it on anime images and it didn't work well on that class" would have been sufficient.


That's a distinction only relevant for research, not an actual product.

People can and do use AI detectors on any art style, often to justify harassment over suspected AI use. It's especially common in the anime community.

A bad AI detector is worse than no AI detection at all, so extra scrutiny is justified.


I wasn't aware of that context in the anime art community; good to know, thanks. Artist-drawn anime images haven't been a primary stylistic focus so far, so the gap makes sense.


Heh, I tried the exact same thing. I uploaded sketches from my favorite artists. Specifically, sketches produced between 2012 and 2016. All of them were identified as AI with greater than 50% probability.

Of course, if one uploads recent sketches, one could be cynical and claim the artist traced over an AI-generated image. But I have never seen that done in practice.


I fed it a bunch of real images and it failed on all of them.


Shows the Kate Middleton photo as 90% real, 10% AI - https://www.dropbox.com/scl/fi/npww1z7n9su7qgx410qgp/Screens... - the source may have tested it and thought 90% was good enough...




Do you know what base model was used to generate the image? Was it SD v1.5?


s/try/train/


Seconding this: it failed to correctly identify an AI image that had the DALL-E watermark clearly visible.

What plans are there to guard against people intentionally poisoning your training data by miscategorizing the images they upload for classification?


Very poor accuracy. I only gave it AI photos and they came back randomly at around 60% real or 60% AI. It's practically useless.


Failed the Stable Diffusion test. Obviously a good idea, but no, the tech doesn't deliver.


Given a random set of realistic-looking real and AI images, we have found that humans usually score in the 65-80% accuracy range. You can give it a try here: https://sightengine.com/ai-or-not


I was pretty dead on with photos of people. Especially if they're in color.

And it's not just a hands thing. There's often an element of surreal excess, or a kind of uncanny-valley/plasticky quality. If I had to point to one thing, it would be skin: AI seems to be bad at generating skin; it has a slightly cartoony look to it. If I were to venture a guess, it's because of the number of photos out there filtered to shit.

I was worst at what I'll loosely call zoomed-in landscape photography: pictures taken from far away, but zoomed in and focused so the foreground and background are both sharp. On those I was close to 50/50.


Yes, zooming in on skin usually helps a lot to recognize raw GenAI outputs.

The examples chosen in this test were not collected to be very adversarial, and no additional processing was done.



