How Hackers Stole 200,000 Citi Accounts Just By Changing Numbers In The URL (consumerist.com)
360 points by mmavnn on June 15, 2011 | 159 comments



This is unacceptable, obviously. It's a massive failure of their security testing protocols. I'm not particularly surprised that a vulnerability like this would get written into the code; it's an easy mistake for an inexperienced developer to make. But I'm not going to pile on the developer. We laud "separation of concerns" in our architecture, and this pattern applies to the organization of software development teams.

I don't expect every developer to be aware of every vulnerability. But I do expect that a financial institution has a specialist somewhere that audits the code before it is sent for testing ("white box"), and then I expect them to have an independent audit team probe for vulnerabilities ("black box").

After the inexperienced developer has had his code rejected for various flaws, he will become quite aware of the obvious ways things like this can go wrong.

Don't get me wrong, I expect that the vast majority of developers wouldn't make this mistake in the first place. But if you aren't a specialist, it is pure hubris to think that you write code that is hardened against all of the attacks out there. And if you have a vulnerability, it really doesn't matter if it's an embarrassingly simple vulnerability or one that requires sophisticated techniques to uncover and exploit. Either way, you're road kill.

High-value targets like banks need two security specialists (the code audit and the penetration test) to accompany the development specialists. That's simple separation of concerns, and it works as well in team organisation as it does in code organisation.


I don't know anyone at Citi, but I do know several firms that do audit & pentest work there, and I'm told it's high-volume --- in other words, it's likely every app gets tested. This is a startling miss.

When you engage a software security firm to check out an app, you're trusting that their report is reasonably complete. As a practitioner, this is basically your nightmare scenario. You know in the back of your head that it's always possible you're going to miss things, but a vulnerability this trivial seems like the kind of thing you find on the first day of testing.


This is the kind of vulnerability that should be caught on the whiteboard before a single line of code is written.

I know this isn't what you're talking about, but I hope that an institution that's protecting my money will formally specify their protocol and not haphazardly put their app together with "Test Driven Development". But maybe that's how they managed to be this stupid.


Though there are no doubt very, very smart developers scattered across Citi, please bear in mind that the wiring-up-form-fields-using-J2EE jobs in large financials are among the least prestigious in our field. These are the types of apps that are among the first to get outsourced.

Your idea that the input and output paths through a typical banking J2EE app were carefully whiteboarded seems mythologized, based on what I've seen at fisrv companies.


That's hair-raising. I have only a small amount of financial firm experience. I'm just used to ordinary software development being white-boarded first.


People forget that statistically the devs hanging around on HN are in the top brackets. The overwhelming majority of devs are 9-5 paycheck players with exactly the level of competence required to execute J2EE and .NET CRUD apps that pass internal acceptance testing.

If you are an enterprise dev with a knack for making web apps that aren't horribly insecure, and you don't have a phantasmagorically awesome comp package already, more advice: start looking for new gigs. You're underemployed.


The problem is that to the PHBs in HR, there doesn't appear to be a difference between you and the guy who just makes it work, security be damned, except that you likely take longer and cost more.


What's the lower bound for a phantasmagorically awesome comp package? I am not good at evaluating the fairness of my current salary.


In a market like the New York, Boston, or Seattle areas, with 5+ years of experience, a top tier software engineer should be able to pull down $100k salary plus 401(k), a good health insurance package (ie, something better than a high deductible plan plus HSA), and long term/short term disability insurance. Adjust the salary number for your local cost of living.


Online bank websites are, at least in the UK, universally terrible. They seem to be just a very, very thin layer on top of the bank's back-end systems, giving such delightful results as there being two different sorts of recurring payments and two types of non-recurring payments I can do from my online banking (actually, I can't set up or cancel every type from within online banking).

Do banks make so little from checking/current accounts that it isn't worth spending a dollar per customer to build a decent solution?


Customers pick banks based on how many ATMs they have, not the quality of the bank's web apps.

Though there clearly are people who care deeply about bank UX, those people are not normal.


(UK has a shared ATM network, so it's not the same issue as it is in the States, where I fondly remember hunting for damn BofA machines. I agree there are other reasons to choose a bank than the website.)

However, it seems like there is relatively little to distinguish between banks, and a web service that truly pleased people might be a good way to get and retain customers, whilst cutting costs (branches and call centres.)


Even for those people who do care about the quality and security of their bank's web services, how many of them are really going to take the time to compare the web services of various banks?

How do you compare them, anyway? It's not like banks give out trial accounts you could use for this purpose.

You'd have to go to the trouble of signing up for a real bank account at each of the banks whose web services you'd like to evaluate. Then spend the time to try them out and compare. So even a cursory inspection of a variety of banks' web services would already be a pretty huge hassle.

Even if you go to all this trouble, you're probably not going to know much more than how intuitive a given bank's web interface is, or if they did something glaringly awful in terms of its usability.

You wouldn't know how accessible the interface is under load, or test how well the bank's interface handles the various real-life corner cases you're likely to encounter over time.

And how would you test the interface's security? Is even a reasonably security-aware consumer really going to try breaking in to the bank?

The ironic thing is that most users who use online banking probably do care about the web interface and its security, even if they can't articulate it in so many words. But of them, only a small minority care enough and are aware enough of the issues to take the time to comparison shop. And even then they probably won't find out much.

There's an opportunity here for a Consumer Reports style investigation into the web interfaces of various banks. That would be a valuable service that could save the consumer a lot of time and hassle, and maybe even do things that a lone consumer couldn't: evaluate interface security.


I did just that when I last needed to open a bank account :) I googled for users' comments about the bank's web interface (was it working fine in non-IE browsers, not using Java applets, etc).


Out of the major banks in Canada, the one with the most ATMs is the smallest--and incidentally also has the highest Forrester's marks in online banking.


I agree with you about planning, but "test driven development" isn't haphazard; it's not just clicking around to see what happens. It's a way of formally specifying your design that simultaneously confirms that your code does what it should.

For instance, you write a test that says the user should get a "not authorized" if they try to access another user's account. You run the test, and it fails. You change your code to make it pass. And every time you update your app from then on, you re-run the test and make sure you didn't make that test fail.

Having a design on paper isn't as useful as having one that you can PROVE your code conforms to.
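
To make that concrete, here's a rough sketch of such a test in pytest style (the client fixture and the /accounts/<id> route are hypothetical, not Citi's actual app):

  # Hypothetical pytest-style test; assumes a test client fixture and two seeded users.
  def test_cannot_view_another_users_account(client):
      # Log in as Alice (account id 1), then request Bob's account (id 2).
      client.login(username="alice", password="correct-horse")
      response = client.get("/accounts/2")
      # The app must refuse rather than leak Bob's data.
      assert response.status_code in (403, 404)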


There is no guarantee that anything specified on a whiteboard will actually make it into physical code, particularly if it's handed down from on high. Banking websites are terrible not because they use TDD, but precisely because of the supposition that a good architect and a whiteboard is enough safeguard to skip them.

Tests make it possible to refactor your code towards simplicity and clarity, making security errors more clear. If you're writing a banking application, ignoring them is simply foolhardy.


So you're taking a ridiculous loophole created by enterprise architects that probably spent six months whiteboarding this shit before letting the devs write a single line of code as an excuse to troll on the subject of TDD? Lame.


This kind of vulnerability requires no whiteboarding. It is fair to assume that a dev with a reasonable understanding of how web protocols work will not need a whiteboard to figure this out, so the argument about catching this on a whiteboard is a little juvenile.

Associating TDD with being haphazard is pretty random. Being haphazard and TDDing are really in two different math spaces. Care not to mix 'em?


Could you please elaborate on why you perceive a correlation (or causation?) between TDD and haphazard development?


If you're "haphazardly putting your app together" then you're not doing "test driven development".

TDD and using your brain coexist quite happily. But if you're not using your brain, then TDD won't save you.

So no, that's not how they managed to be this stupid. They managed it through not using their brains.


Citi fails on a lot of other basic common sense practices. For example, their password fields are limited to 8 characters.


Unrelated anecdote:

When working with a substantial financial organisation, there were strict rules prohibiting my team from communicating with the code audit and other security teams. Going for coffee with one of the internal audit people would get you both fired. The external testing was done by a security consulting firm. Their reports were passed on to me with the identifying details stripped out so that I wouldn't even know who they were.

Talk about "low coupling!"


No doubt the goal there is to prevent collusion between a developer who introduces a subtle but exploitable bug and the auditors/testers who might be interested in failing to find such a bug, for an appropriate percentage of course.


It shouldn't matter to you who the external firm is. Stripping out their contact info, at least in the archives, is probably smart; it makes it easier for the company to keep multiple security firms working on their projects. Which, when you're a huge bank, is what you want to be doing.


Sadly the problem is at a higher level than that, and not restricted to security. Today software development might as well be magic. There are no reliable objective methods for determining developer or software quality. The only method that's proven to work well is the subjective judgment of another "magician". The same applies to software security as well.

This creates a bootstrapping problem. If you're an organization of magicians then you have a comparatively easy time finding other good magicians. But if you don't have that history and you still have the need, you won't be able to know the quality of the people you're hiring except on the least granular of scales (lots of failed and low quality projects). This is a big reason behind so many of the problems of software development in "the enterprise": people who know how to run a bank or a media company haven't the slightest clue how to run software development, and there generally aren't trusted 3rd parties that people can go to for help.

Worse yet, there are perverse incentives at play. If you can't tell the difference between a good security person and a stunningly mediocre one, you are going to go with the cheapest one who you think is good enough (generally: solid job history, resume fully buzzword compliant, etc.)


>>> Today software development might as well be magic. There are no reliable objective methods for determining developer or software quality. The only method that's proven to work well is the subjective judgment of another "magician".

Try replacing "software development" with "writing English".

I am going to stop banging on about this eventually, but reading and writing code is something that needs to be understood at all levels of a company before that company gets good at it. If the boss of the company and everyone who reports to him is illiterate, one cannot expect the memos and the policy documents and the amusing posters to be of any decent quality.


Hmm,

Perhaps the problem is the "security specialists".

I agree that just about any reasonably skilled developer knows to, at the absolute minimum, include a user-specific session key with every transaction.

But I've also seen "security specialists" get a pass for a variety of bad behaviors. And we've heard of the complete incompetence of HBGary. My hunch is that the secrecy/obscurity that passed for security in pre-Internet culture is exactly what allows total screw-ups like this to happen in these large organizations. The gold-plated consultants who are friends with the VP wind up being far less competent than your average/decent developer.


There's probably a kernel of truth to your comment, but that last graf makes no sense. The "security specialists" aren't writing the code with vulnerable direct object references; the "average decent developers" are. It's true that there are consultants out there that don't improve the (poor) seaworthiness of typical developer output, but few of them are so bad as to make the situation worse.

And, the notion that "good" developers turn out uniformly secure code is delusive. We're lucky to have gotten to work with some of the best run, most carefully recruited teams in the industry. You find terrifying things on all these engagements.

My advice, again as a practitioner: if your security firm isn't finding terrifying things, at least on the first go-round with an app, ask very tough questions. Ask if they need an extra week or two to catch up and get real findings. Then ask which of the two of you should pay for those weeks.

We are still finding code execution on a good chunk of our web gigs, even at the "good" companies.


Just as a corroborating point, although I'm not a professional security developer I've been an interested bystander since I was a stupid, self-taught, grey hat 16-year-old 5 years ago.

Anyway, I once found myself writing some PHP code to demo a slightly complex SQL injection attack for the class I co-lecture at Northwestern (Network Security and Penetration). This code purposefully had a SQL injection vulnerability in it. It wasn't until the third reading of my own code that I noticed that I mistakenly dropped a CSRF vulnerability in alongside it. CSRF was literally the topic I was teaching next Monday and I put one into my own security code accidentally.

Secure code is so difficult to write that I can't believe that even the best developer writes secure code much of the time. Hell, apparently even I can't write secure PHP when I'm looking straight at it.


I worked at a company where we had an individual hired as a "security specialist" who wrote code and got more or less a free pass on his behavior and approach. I suppose that's different from a full security consultant.


Your pentest firm should not be writing code for you. That conflict of interest is pretty obvious. If I had a consultant doing that for my company, I'd retain a rival firm to review the code, and make sure both the consultant coder and the pentester team knew about each other.

If you told me someone at, say, iSec Partners (a firm I like) wrote something I was pentesting, I'd go f'ing nuts trying to find flaws in it. If you told me iSec was reviewing something I wrote, I'd stay up nights thinking of ways to shore it up.


In 2000(?) I employed this same 'hack' against Ameritech's online bill viewer, but couldn't get anyone's attention. I called several people at Ameritech, but couldn't get through to anyone who understood anything I was saying.

I tried to get ahold of the news media, but realized afterwards that the links I was sending did have a session timeout associated, so by the time a reporter clicked a link, they got nothing.

Finally, I managed to get in touch with someone at 'fuckameritech.net' (IIRC) - a consumer watchdog (I hesitate to say 'group' - I think it was just one guy) who said "I'll take care of it". He made some contacts - I think got it to a reporter in Chicago, and that afternoon Ameritech's online bill view and pay was taken down (a wednesday IIRC) and it wasn't brought up again until Monday.

The 'fix' was not much - they were now hashing the account number in some massively long (128 char?) ID instead of just your account number. But it was all still visible in the URL, which was the bigger problem to start with, because it encouraged 'hackers' like me to change my account number by one digit.

I suspect others had noticed this before, tried to contact citi, and couldn't get in touch with anyone who understood what the caller was saying.

Companies need separate 'web vulnerability' hotlines to call/contact to report issues like this - perhaps just hidden in the 'view source' - if you're good enough to find the info, you know what you're doing enough to report a problem. Too low a bar?


I imagine that publicizing a web vulnerability hotline would result in more trouble than it would solve. Normal people really don't understand computers. If you somehow give off the message that your system is not perfectly secure or bug-free, they would get scared and run off to competitors who are just as bug-ridden but at least appear to be more secure.


Normal phone techs should know how to deal with such calls (even if they don't understand the exact problem) and have a line/department to forward the call to. That provides filtering for legit claims, and avoids advertising it and scaring people. Simple.


While I sort of see that, marketing/wording could spin this as a positive (assuming it was even something that was publicized - perhaps hidden in the markup is good enough?).

We've popularized crime reporting as a social good - McGruff the Crime Dog, etc. When will we start taking online safety and security with the same level of seriousness?

As much as I don't like the idea, being able to report issues to a state or federal agency might be a way to go.


Eh I dunno. Google does this with its Vulnerability Reward Program [1] and people seem to be fine with sharing almost all of their private data with Google from all their e-mails to their credit card numbers (Google checkout), etc., etc.

Also Facebook has a form for reporting vulns [2] and people are still happy to share their personal info there. I'm sure there are other companies that have "hotlines" but these are just a few I can think of.

I don't think having an avenue for responsible security bug disclosure gives anyone the impression that their data is unsafe.

[1] http://googleonlinesecurity.blogspot.com/2010/11/rewarding-w...

[2] https://www.facebook.com/help/contact.php?show_form=white_ha...



Aren't you acknowledging violation of the Facebook TOS by submitting that form?


I don't disagree with you, but this reminds me of the anti-seatbelt arguments from automakers before the law required them.


I agree that companies need an easy way for people to report potential problems or risks. One company I was with had no idea it was a botnet C&C until months later somebody looked up our WHOIS and sent an abuse e-mail. Some non-hacking-related forums had known for a while.


A security guy weighs in on it here:

http://idunno.org/archive/2011/06/14/citibank-hacked-ndash-d...

"This was not sophisticated or ingenious, as reported, this was boringly simple. ... OWASP has had Insecure Direct Object references on it’s Top 10 list for years. It’s in the SDL Threat Modeling tool. Any security firm worth its salt checks for this"

Yes, there's a good description of this kind of trivial "hack" in the Open Web Application Security Project Top 10: https://www.owasp.org/index.php/Top_10_2010-A4


Yeah. When that first report said 'experts' said how hard it would be to have predicted and prevented this, I choked. Amazingly lame.


Citi has an internal web application pen-testing group. My guess is they were only ever attacking the outward facing apps and not the ones after successful authentication. Even if so, they may have hundreds of apps to test which constantly change, and sometimes attackers just get lucky and find a hole as soon as it pops up.


My understanding, just to clarify: Citi has a web app pentesting operation, which engages and is among the largest customers for several well-known pentesting firms. Just in case the impression was that Citi has a couple guys in a room doing this stuff.

As for constantly-changing apps, let me speak against my own direct financial interests here (we have a product coming out that addresses that problem, so I'd like it to be a big one). Many, if not most, of the large financials we work with or have talked to have a fairly strict process for deploying new code, and the process gates on security review. Not deploying unreviewed code comes pretty close to being part of the due care standard at large modern banks. If I had to gamble on this, I'd bet that this specific code did get reviewed.


I can only think of three scenarios which would result in this breach (but I could just be lacking imagination):

1.) This app has been sitting around in production and was never tested.

2.) This app was part of the normal testing procedures (which usually means it's tested annually) and somehow this vulnerability was missed in every test.

3.) This vulnerability was not present the last time the application was tested, and somehow this version was deployed before it was signed off on.

I've been around too long in this industry to claim that scenario 1 or 2 are impossible, but knowing the particulars, they seem exceedingly unlikely.

That leaves me to think it was the third scenario, which is still aberrant behavior on their part.

I feel bad when I hear about situations like this. As you mentioned in another comment, this is pretty much what we fear the most.


I don't know Citi at all, but at our fisrv customers I think (2) is more likely than (3) (neither is a mortal lock). I also think that this is a hazard of working with high-volume Big-4 type firms... but I want to tread lightly with that thought for obvious reasons.


No no, I absolutely agree with you (about the hazard). I worry about any company that puts all its app test eggs into one large contract with a big firm (a statement which I'm sure would make one of my salespeople cringe). I find that the places who use a combination of multiple app testing companies in combination with their internal teams seem to fare much better.

For this specific vulnerability, I find it shocking that even the most rudimentary assessment wouldn't have caught it; but my own personal befuddlement might be biasing me against thinking that (2) is likely.


Ah. The guys I knew at Citi had a small team - I'm not sure if they were direct hire or via contract - but they basically dealt with the low-hanging web-app-pentest-fruit.

I don't work for a financial company but I work for one that works for/with them. Their code review is more like: "Hey, did you even test this code in QC? I can see a syntax error." I don't think anyone here actively looks for security problems during review. If it compiles and it's sat in dev/qc for a month (we just assume it's been tested), it's pushed out. I don't think anybody here would recognize XSS if it hit them in the face, and this particular bug ("allow any authenticated user to view any URI matching a given string") sounds suspiciously like a bad ACL rule in their identity/access management servers.

Edit: and the web app would have to not be rejecting access by an invalid user. I can see a single line's test being formatted in a weird way and this getting missed when somebody committed it - after all, if it doesn't cause failure, is there a bug?


From the article:

"this is a dead simple and common hack and Citi should have seen it and prevented against it. Seriously, this is kindergarten level stuff. Really, really stupid."


I'm kind of torn on this. On the one hand, yeah, it is a trivial flaw.

On the other hand, so is waving a gun at a teller. That attack has been around for decades and still works a few dozen times a year, because the cost/benefit analysis says that after hardening the banks a little it is easier to just lose a few tens of thousands of dollars every once in a while than it is to give them the Secret Service's attention to physical security.

That is hardly the only systemic vulnerability in the banking system. For example, let's suppose I want to compromise your account number and credentials sufficient to take you for every penny you possess. You know what I need? A check of yours. Any will do. Everything I need to create a demand draft against your account is on every check you have ever written. Every employee of every business you have ever paid by check got the keys to your financial kingdom.

You may not be aware of it, but since those credentials are assumed compromised, the security is in a) catching me when I use the demand draft to suspiciously drain your account and b) failing that, making you whole out of the bank's pocket. The numbers have been crunched: it is vastly, vastly more efficient to treat fraud as a cost of doing business than it is to tighten the screws 100%

The attack surface on software the size and complexity of a bank's is like the Death Star, except any single rivet being out of place will eventually result in this headline.

(Step #1 in tightening the screws would be turn off public facing websites, because inexpert users plus compromised machines means that no banking website will ever be secure, even without coding errors. This will never happen, because the provable cost savings of moving customers to online banking roflstomp over the marginal fraud risk.)


This isn't akin to waving a gun at a teller. This is akin to handing the teller a huge stack of withdrawal forms with random account numbers on them, and the teller dutifully checking each one, ignoring the ones with invalid numbers (most of them), and handing over the account contents for all the valid ones. This doesn't work, because the teller acts as security: if something is amiss in the implied security checks (to wit, there is no reason one person should be submitting requests regarding lots of account numbers, especially when most of them are invalid), she'll have alerted management & security within seconds of seeing the bizarre request.

Your subsequent analogy/justification/complaint is only valid if ONE doctored URL were used. My bank, Chase, does in fact implement security against such a "lots of random account numbers" attack: not only must the account match, but the MAC/IP address, browser/cookie, and other under-the-hood identifiers must line up; any mismatch between account number and access tools initiates emailing or texting a verification code to a known address/phone, which then must be submitted to close the loop of verification and, only then, allow access. Not running some kind of "one account per access device" sanity check is insane.

It's not about one rivet being out of place - such vulnerabilities are understandable. It's about having an uncovered vent lead straight to the reactor core - that's stupid.


> Not running some kind of "one account per access device" sanity check is insane

So I would need different accounts for my personal computer, my laptop, office computer and the computers of my parents? I regularly work on all of those.


It's a sanity check, not an absolute limitation. My bank DOES (as I detailed) require positive-feedback verification that any attempt to use more than "one account per access device" is in fact authorized by the account holder. Any time I use a new access device, they email/text to a known address/phone a verification code which I must feed back before the login proceeds. The sanity check is: if the account is being accessed from a device not used before, the legitimacy is suspect until confirmed.

This in contrast to the lead story, where some 200,000 accounts were accessed from a very small number of computers clearly not authorized by the account holders - achieved because not even a basic sanity check was performed. Heck, the servers didn't even notice that no login process was performed for the accounts, much less track which devices the account holders tended to use.


I think you're missing the point (or I'm inferring it incorrectly), that, put simply, there is a cost associated with implementing the security measure (both to build it and the inconvenience to customers) which in some cases outweighs the potential cost of the exploit.


Of course cost/benefit ratios should be considered. Not much point to debating that when it's obvious they weren't.

The lock on the front door was good, but the only thing that protected the bank until now was that nobody tried the windows & safes on the assumption that they would, in fact, be locked.

That's not weighing costs, that's being criminally negligent.


Like the Death Star...


The attack surface on software the size and complexity of a bank's is like the Death Star, except any single rivet being out of place will eventually result in this headline.

I agree that banking software is probably complicated. But really, this bug is a beginner's mistake and it should have never happened.

It is something that the original developers should have known about and also something that the company auditing this code should have seen.

There is really no excuse for this.


This is like the first thing any new hax0r tries. Imagine the joy when it actually worked...


I've been doing software security since 1994, but spent 10 of those years as a dev, and so didn't do my first web pentest until '05. On my first ever web gig, I scored a login prompt with 'OR''=' SQLI in the password field. It's like the White Whale for me now; I haven't seen it again, but I know it's out there. Arrrrrhh.


I can understand and even respect a good pw SQL inject, but they MUST HAVE sat there with pins and needles, giddy, saying to themselves: "It CANT be THIS easy!".


I found exactly the same thing in the first week of my previous gig.

Your White Whale unfortunately pops its head up everywhere.


I was thinking the same thing--finally something my grandma and me could have hacked during some "grandma-grandson time".


seriously, it really is.


> The attack surface on software the size and complexity of a bank's is like the Death Star

Really? I'm sure there is more to it than I imagine, but I've used the websites of 3 different banks and their functionality is quite limited; it looks like something you could do in a Rails app in a few weeks.

None of the real payment processing is done by the web app as far as I know, it is just a thin layer on top of a database with the transaction history and can store payment orders for batch processing later on. None of the real payment processing systems would face the internet I imagine, so I think the attack surface would be quite small.


There's also something to be said for having more deterrence than the next guy. Which is better business? Cutting ridiculous corners to save on developing decent security, and eventually getting egg on your face, to the chagrin of your customers, or simply keeping idiotic measures like this out of your software so it's one of your opponents' faces that are hit with the egg?


Running cost/benefit on this is great, except that the numbers still don't add up.

Instead of a user id that is obviously some row in a database, use a UUID and do your db lookups against that. Worst case you have to put in some kind of local UUID -> user id lookup table because you can't manage to change the user record database directly. Now, barring a broken UUID generation scheme, it's virtually impossible to crawl for user accounts. Plus you have a scheme for providing the same guarantee on any other identifiers.

Obviously that doesn't prevent the larger problem of "guessing your user id and munging a url lets me into your account without a login", but it at least makes the guessing part worlds harder for relatively little work.
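
A minimal sketch of that scheme, assuming a plain SQLite table (illustrative only, obviously not any bank's real schema):

  import sqlite3
  import uuid

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, public_id TEXT UNIQUE, owner TEXT)")

  def create_account(owner):
      # Expose a random UUID publicly instead of the sequential row id.
      public_id = str(uuid.uuid4())
      conn.execute("INSERT INTO accounts (public_id, owner) VALUES (?, ?)", (public_id, owner))
      return public_id

  def find_account(public_id):
      # Look up by the unguessable public id; crawling ids 1, 2, 3... yields nothing.
      return conn.execute(
          "SELECT id, owner FROM accounts WHERE public_id = ?", (public_id,)
      ).fetchone()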


On the one hand, yeah, it is a trivial flaw. On the other hand, so is waving a gun at a teller.

Not really. To make it so that waving a gun at a teller doesn't work you need to do lots and lots of complicated hard things.

However, it's nowhere near as hard to ensure that the person with account A doesn't access account B.


Nonetheless, banks still lock their branches at night. That's about as basic as security gets.


How is this even possible? In my very first website I built from scratch using PHP, I paid attention to the possibility of this. I can't say for certain that I fully protected against it, but I tried. That little trick would not have worked.

How is it that a bank, of all places, pays money for a web infrastructure, and manages to employ people who don't even think about the most basic of attacks? I've been changing info in URLs since I started using the internet.


Diaspora had this problem when they first released their software, and it was written in Rails, a modern framework http://www.kalzumeus.com/2010/09/22/security-lessons-learned...


The framework has nothing to do with not authorizing requests properly. Diaspora had many problems when they first released because apparently they were all inexperienced programmers, like the guy who wrote that in Citi.


You can't really compare this with a bank like Citigroup. Security is everything they should care about in the first place. They have to protect your money and nobody really cares about fancy features when they just want to wire some money, everything should just be rock solid.

I would love to get more information about this breach; it sounds too simple to be true.


They have to protect your money and nobody really cares about fancy features when they just want to wire some money, everything should just be rock solid.

Those two desires are in direct competition. For example, I bank at Citibank in America precisely because their website will allow me to initiate a US to JPY international wire transfer without my physical presence in the US. That class of activities is just about the most dangerous thing a consumer-grade banking website could allow you to do. (International wire transfers are practically non-reversible. If you are induced to send one to a fraudster or they compromise your online account and send one on your behalf, and the bank doesn't catch on within a few seconds, you're pretty much screwed.)

Accordingly, many banks do not offer online international wire transfers and will laugh in the general direction of adding it to their feature lists, despite it being technically not rocket science.

The rock solid feature that lets me eat on a semi-regular basis is also an attack vector against almost every other HNer with a Citi account, most of whom will never send money overseas. So, what should Citi do? Optimize for security and shut down that feature from their website, or optimize for being able to just wire some money in a rock solid fashion?


> The rock solid feature that lets me eat on a semi-regular basis is also an attack vector against almost every other HNer with a Citi account, most of whom will never send money overseas. So, what should Citi do? Optimize for security and shut down that feature from their website, or optimize for being able to just wire some money in a rock solid fashion?

Perhaps require a setup process to enable the feature on an account-by-account basis. When you set up your Citi account you could have had another piece of paper work that you signed stating you know the risks of international wire transfers and that you authorize Citi to allow them to be processed through the website. Most customers wouldn't set it up and will have exactly the same experience as they do now, some wouldn't get scammed and customers like you also get to sit pretty.


My bank (Australian) requires SMS-based two-factor authentication on international transfers -- they send a one time code in an SMS to your registered phone number, and you have to enter that, as well as re-entering your bank website password, before the transaction will proceed. Works anywhere in the world as long as you have your phone with you.

You can change the registered phone number only by calling the bank and passing a set of identification questions posed by a human operator, so there has to be some significant identity theft to get past it. I don't think that would be particularly hard for a determined and experienced thief though.

I guess it hinges on what the meanings of "fancy features" and "everything should just be rock solid" are. I tend to agree with your earlier comment that for the banks this comes down to a risk assessment and a cost/benefit equation.


>> So, what should Citi do? Optimize for security and shut down that feature from their website, or optimize for being able to just wire some money in a rock solid fashion?

In practice what Citi does is neither of these absolute extremes; it just flags transactions over a certain amount (I believe it's generally $5K) and any transactions that the Citi systems deem suspicious. Anything in these categories yields a notification to the account holder, and the transfer has to be confirmed via phone before the funds are released. This is a variation of the "Are you sure you want to do this?" messagebox confirmation in programming.


Why not require a separate PIN code for certain transactions, like wire transfers? That wouldn't help against a keylogger, but it would help if the account was compromised in this way.


The 2-factor approach my bank takes (bit of detail at http://news.ycombinator.com/item?id=2634730), is what I consider a decent security/usability tradeoff.

When your phone has a NFC reader and your bank smartcard can talk to each other to handle it, even better. (Well, higher risk of intrusion because it's a multipurpose device, but way ahead in terms of usability)


"Security is everything the should care about in the first place."

profits are all they care about.

Honestly, stuff like this burns me up. Large orgs like this lobbied for crap like PCI compliance standards to 'protect' data, but they don't have to follow their own rules. Seriously, if PCI compliance is mandated for anyone who stores CC info, why the hell isn't citi being shut down for this sort of breach? 'too big to fail'?


Has it been determined that Citi will not face repercussions for this incident? I think part of the reason PCI DSS was created (by the payment processors MasterCard, Visa, Amex, etc) was to allow for more legal leverage against the banks when determining who has to pay damages in scenarios such as this.


No, it hasn't. I'm just jumping the gun in a frothy rage.

Something this egregious should have been caught by PCI compliance checks (if not development) in the first place though.

I doubt the penalty will be anything severe enough - something like "no cc processing and management for citi for 180 days" might make them take this a bit more seriously.


I think PCI is more of a reactive legal tool than it is an effective, proactive way to prevent security breaches. In theory it should catch security vulnerabilities, but I don't know how thorough the compliance checks really are...

Yeah, Citi's punishment will more likely be in the form of a weakened reputation than actual damages paid.


They're long, involved and expensive, although I'm not sure how thorough or really preventative they may be - you're right. For companies just getting started, they probably serve more preventative purposes than they do for already established players who were around before the PCI stuff came around.


> You cant really compare this with a bank like citygroup.

Yes, yes you can.

Listen, PCI compliance is something people like to talk about, but you'd be surprised at the number of companies that don't follow even the bare minimum (unencrypted cards and keeping the CVV). We are talking about large, national corporations (and not Sony).

Besides, who do you think is writing this software? Normal people. There isn't a "Programming for Banks" degree you can get. It's programming. They hire contractors for months/years at a time, and then they are done with them.


How indeed... How can someone be able to architect the system of accounts, connection info, etc. for a bank but not be aware of such things? And it's not even a matter of being lazy, either. Sigh... To all you 13 year old kids on the road to learning web dev, this is what you're competing against!


You'll surely enjoy the quote from the linked article:

=======

The method is seemingly simple, but the fact that the thieves knew to focus on this particular vulnerability marks the Citigroup attack as especially ingenious, security experts said.

=======

Sorry but no, this isn't ingenious - it's really the basics!!!


Yeah, the NYTimes article really made it look like the hackers went into a lot of trouble to breach the bank's security.


Of course they did; if they made it look simple then everyone would freak out (or they should, and there'd be a mass exodus away from Citi). People are used to hackers finding occasional, difficult exploits.


Of course, the hackers were suffering from acute Alzheimer's.


If this really was the "hack", you can be sure that Citi has opened themselves up to a whole world of negligence lawsuits. This is the same as having a vault where any customer could walk in and just browse around the safe deposit boxes. Sure, it might be tough to be authenticated to get into the vault, but once you're there...

This is something that should cause the immediate dismissal of the CIO, but sadly, probably won't.



Consumerist isn't a Gawker property anymore. They are now owned by Consumers Union, who also publishes Consumer Reports.


I was actually disappointed by the NYT article. They interview security experts who call the attack "ingenious", "hard to prepare for" and performed by exploiting a vulnerability in a browser. This understates how incompetent the bank's website design is.


That was why I linked to this article rather than the NYT. The article linked to, while shorter, contains all of the relevant information and less completely random cluelessness.


The NYTimes story is the 2nd crime, because they're whitewashing Citicorp's responsibility. With all the hacking that's been going on recently, the NYT needs a full time cyber security reporter.


A naive question here:

Suppose I accidentally stumble upon a gaping security hole in my bank's online service (or any other online service for that matter).

Am I legally obliged to notify them of that security bug? Can I offer the bank my assistance, for hire, in solving the bug without it constituting blackmail? (i.e. I'd be happy to help you solve this at a $300/hr rate)


Legally obligated? no. It's not your system, it's not your problem in that sense.

Sure you can offer to fix it, but since you don't know squat about the system (save for a small flaw at the surface) and they have teams of developers who do, they won't be interested in paying you to fix it. Offering to explain the bug for a fee won't be blackmail unless you threaten to reveal the bug to others if they don't pay up.

Be a decent chap. Send 'em a nice letter explaining the problem. It's your bank, remember, and they're humans like you; work with your service providers to improve the service. Assume you're not the only one who knows about the problem, that someone who also knows isn't as nice as you, and it's YOUR bank balance that is at risk.


But it's a corporation, not humans like you. I'm sure they employ lots of decent people. I'm sure they employ lots of douchebags too. It's irrelevant. It's a corporation whose only motivation is to make money. It's not your neighbour.

If you like their service and feel like telling them then go for it. If you want to try to charge them, then go for it (no it doesn't make you evil to charge for a service). If they don't want to pay you, feel free to say nothing.

Now if it's a mom and pop shop down the street, then yes, please be a good neighbour and help them fix it (although you can still charge for it, but avoid douchebaggery like $300 / hour unless that's your usual rate).


Hmmm. How successful does the mom and pop have to be before you feel no obligation to tell them?


As a general rule for me it's once a company goes public.

At that point the company is subject to practically anonymous shareholders through many levels of abstraction. Voting is then done based purely on financials and often short term gains, which really puts a company at odds with their customers. So I have no customer loyalty to a company that's publicly traded.

Many private companies can still maintain my loyalty though based purely on their actions if the owner(s) / investors aren't completely disconnected from their customers. That's more of a case by case. The smaller they are, the more likely they care about their customers (there are always exceptions though).


When Mom and Pop work in a corporate office.


Of course then there is the saying "No good deed goes unpunished."


I've always got the impression that it's more convenient and less risky to not tell anyone; when the flaw is eventually exploited, you don't want to be the one on record as having known about it.


Or report it anonymously using Tor or a remailer. If you do it anonymously, you also get the benefit of being able to safely threaten the bank that you will release the information publicly if they haven't fixed it within x days.


Or you could just send them a real letter (I think an institution like a bank would respond better to that than an email). I'd just tell them that I'm a concerned customer who stumbled upon the flaw.

I would never threaten them to release it publicly though - I'd tell them that I'd forward the letter to the relevant government organisation. Where I am in Australia this would be something like the BFSO (Banking and Financial Services Ombudsman) and/or ACCC (Australian Consumer and Competition Commission).


How can a bank this large have such poorly designed security? It's ridiculous. Hopefully, all these latest hacks get everyone else to treat security more seriously. There could be a lot of other banks that do the same thing as Citigroup. So if one gets hacked, at least the others will remember to review their security policies, so it doesn't happen to them, too.


What's worse: that every customer's ID in the database was exposed in the URL, or that there was no ACL to test against? If a user is logged in, you have their account ID stored in a session. If they navigate to a page that their account ID can't see (like another person's account), then kick them out. Astoundingly simple.
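
A minimal sketch of that check, assuming a Flask app (the ownership lookup is a placeholder; this is not how Citi's stack actually works):

  from flask import Flask, session, abort

  app = Flask(__name__)
  app.secret_key = "demo-only-secret"

  def user_owns_account(user_id, account_id):
      # Placeholder for a real ownership lookup against the database.
      return (user_id, account_id) in {("alice", "12345")}

  @app.route("/accounts/<account_id>")
  def show_account(account_id):
      user_id = session.get("user_id")
      if user_id is None:
          abort(401)  # not logged in at all
      if not user_owns_account(user_id, account_id):
          abort(403)  # logged in, but not their account: kick them out
      return "Account %s details for %s" % (account_id, user_id)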


Seriously how do people stay in their jobs allowing crap like this to happen? The CIO or CTO at Citi should get the boot. Until companies like this and Sony start making examples of people, this kind of sloppiness that gives our industry a bad name will continue.


Except the examples that will be made will not involve the CIO/CTO. And even if it did, the guy almost certainly has a golden parachute - what does he care?

As long as it's cheaper to clean up after the debacle than prevent it in the first place, that's what people will opt for.


"And even if it did, the guy almost certainly a golden parachute - what does he care?"

If he really was fired and the company made a public stink about how much he screwed up, he may well care.

Many people at that level care about their prestige, and care about being shamed in front of their peers, even when they don't need to worry financially.


They probably didn't develop it in-house, so it's just a matter of blaming and changing providers.


In the majority of corporations, responsibility and accountability end somewhere between VP and CEO.


I've been ranting on this stuff for a couple years now to my friends. There are some alarming trends. First off, a pen test is often treated the same as an attorney-client relationship. If the test turns up particularly costly bad news, I've seen a handful of testers have the relationship essentially severed, receive some hard language from a lawyer about talking about it, and then receive a check from a private account, as if the company doesn't want to leave any traces that they actually knew about the problems. (I'm not joking; some medium sized companies have done this.)

With some of the regulations, the big missing piece is openness; there is no transparency into it at all. Any audited company should say who audited them, and then after some period of time, 180 days maybe, the audit should be made public. The business risk is that customers will leave, but in many cases, like the PlayStation Network, customers effectively can't leave; they've already invested in something and there isn't an alternative. In many other cases it's not typically going to be widely publicized. If the customers can't leave en masse, there is no business pressure for security, and without any transparency the regulations will simply be gamed.


I discovered that my bank, Banque Nationale, used GET to delete transactions from the history. Somebody could then send a mail to the bank's clients with an image linked to this GET action and delete a client's transactions if he was logged into the bank and reading his email at the same time. It wasn't a big risk, but I don't understand how this went live. I mean, if a bank can't get that POST is for C_UD and GET is for _R__, then who can?
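
For illustration, here's roughly what getting that right looks like in a hypothetical Flask app: reads stay on GET, while the deletion only accepts POST (and should also carry a CSRF token), so an img tag in an email can't trigger it:

  from flask import Flask, request, abort, session

  app = Flask(__name__)
  app.secret_key = "demo-only-secret"

  @app.route("/transactions/<int:tx_id>", methods=["GET"])
  def show_transaction(tx_id):
      # GET is read-only, so following it from a link or an img tag changes nothing.
      return "Transaction %d" % tx_id

  @app.route("/transactions/<int:tx_id>/delete", methods=["POST"])
  def delete_transaction(tx_id):
      # State changes happen only on POST; a browser won't issue POST from an img src.
      # A real app would compare a CSRF token tied to the user's session.
      if request.form.get("csrf_token") != session.get("csrf_token"):
          abort(403)
      return "Deleted transaction %d" % tx_id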


I'd argue that this wasn't even a hack. This is publicly visible information. No security measures were circumvented.


This is like the first thing you learn in web app security (defender or attacker), and you don't even need to write a script; a tool such as http://code.google.com/p/fm-fsf/ will scrape the data quickly.

Even though it's insanely easy to spot and exploit it's also easy to miss it while coding. But any decent pen-tester will find it. Regardless, unacceptable for a finance company.


All you need is curl:

  curl http://example.com/user/[1-100]


No, I don't think it's easy to miss, because it shouldn't be possible in the first place. At least for banking software, the web application layer is not the right place to check for this kind of authentication. This makes me wonder what their code must look like...


I know exactly how this occurs because I recently met a "Senior Web Developer" at an established business who was basically their acting architect because he was their first coder, and therefore his non-technical bosses regarded him as some kind of genius because he knows how to unjam the office printer. He didn't know a lick of Unix, didn't understand load balancing, and had very weak SQL skills. He was your typical framework junkie who couldn't imagine writing even the simplest web app without a framework to do all the heavy lifting. All he wanted was for me to recommend an even simpler web framework so he wouldn't have to write any SQL at all. No doubt some day his code will be generating headlines like this one, and he will no doubt blame whatever framework he used, and his bosses will simply mandate that they switch to a more secure framework pronto, and they'll promote this boob as their Senior Architect to lead the project.


That actually sounds like the right way to do it to me. He shouldn't be writing frameworks. Frameworks are reviewed by a lot of people, and there is a far greater chance of them being bug free than something home-rolled.


I've met hundreds of these boobs in IT shops around the world. There's a huge skills gap between IT and software devs. IT guys are not given the time nor resources to learn and implement all this stuff. Also, they often don't have the skills or curiosity to figure it out themselves. So they really do need a framework that takes care of it all easily.


The big question is why the structure of the IT department lent itself to doing something so stupid.

You can fire the CIO, you can replace the offshore developers with onshore, or vice versa, but experience says it won't matter.

I looked in amazement at googletesting's dependency graph test suites yesterday, and realised that the playing field is not flat at all.

Reading and writing code is the literacy of the 21st C.

And in the end, most big companies are like newspapers owned and managed by illiterates.

It does not matter how you rearrange the structure or the hierarchy; when the chips are down, decisions will be made on what the illiterate management understands is the best way to work. As such, it is infinitely unlikely that the decision will be set up to support what a literate person would decide.

Until a generation of coders grows up, or all illiterate companies go bankrupt, this will merely be one of a myriad of pathologies exhibited by large companies run by the illiterate.


I have real problems believing that this could be true. Not even a first year student would be stupid enough to expose a user id in the URL, read from it without any access checks, and use it for access to the related account data. How would they even get the idea to do such a thing?

And as for the "hackers", I guess legally this was not even a break-in. At least in Germany, for legally being a break-in, a computer system must be "specially secured with the intention of preventing access". Well, this system wasn't.

...still, I have a hard time believing that it could be true.


There's nothing wrong with having a user id in a URL. That's very RESTful (my account is distinct from your account, so they really ought to have different URLs), and can make a lot of sense in cases where users may delegate permission to manage each others' records. It's just trusting that user id without an authorization check that's idiotic.


It's RESTful, yes, but with the wrong item of data.

To identify a logged in user and give the user access to their private account data, ONLY EVER use a unique and temporary random string. Nothing else. Ever.

Storing that random string in the URL may be done but is more insecure, because it will remain in the browser history. Not good if the user is on a public PC. Better to store it in a cookie.
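
As a sketch of what "unique and temporary random string" means in practice (standard library only; the in-memory store is just illustrative):

  import secrets
  import time

  SESSIONS = {}  # token -> (user_id, expiry); a real app would use a server-side store

  def create_session(user_id, ttl_seconds=900):
      # 256 bits of randomness: infeasible to guess, unlike a sequential account id.
      token = secrets.token_urlsafe(32)
      SESSIONS[token] = (user_id, time.time() + ttl_seconds)
      return token  # send back in a Secure, HttpOnly cookie rather than in the URL

  def lookup_session(token):
      entry = SESSIONS.get(token)
      if entry is None or entry[1] < time.time():
          return None  # unknown or expired token
      return entry[0]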


"Who am I", "which users/accounts may I access", and "which user/account do I want to access right now" are different questions, and it's only the latter that belongs in the URL. I agree that secrets in particular must not be exposed in the URL.


I personally prefer NOT to use Primary Keys in URLs for this exact reason. At the very least you can generate unique and difficult to guess keys that correspond to those IDs. A record ID (user id for example) has no use outside of database joins and therefore is useless in the public domain (url). As well, for public data (obviously not bank data) a data-based-key (song name, restaurant title, whatever) is far more useful in debugging and in logs.

Otherwise, you're absolutely right: access to protected records - via id, key, or whatever - should absolutely be checked against the authenticated session upon every request.


It's a basic error. A tech architect should be seeing this in milliseconds. It's a total design flaw. You should not be going straight to SQL with just parameters in a querystring; there should at least be authenticated user account verification checking.

It also doesn't say much for the company doing the security review; it's a basic check. Furthermore, to not have a user/owner id to join on there (no doubt a SQL back end) is shameful. I mean, I can see it now:

  select x from accounttable where accountnumber = @val

how about simply :

  select x from accounttable where accountnumber = @accno and ownerid = @ownerid


What worries me the most is that one expert that the Daily Mail interviewed said "It would have been hard to prepare for this type of vulnerability." The same expert "wondered how the hackers could have known to breach security by focusing on the vulnerability in the browser."

http://www.dailymail.co.uk/news/article-2003393/How-Citigrou...


I've never worked for a bank, or any company that held sensitive information. I've only worked for companies that sold products to be used internally. Grains of salt are on the table to your left.

What this looks like, in the context of all the other serious recent breaches like Sony and the IMF, and from the point of view of someone who's never had to fight this particular battle but knows a little code, is that these corps deployed online apps in the early days when this wasn't a major part of their corporate face. Practices and points of view evolved from an initial environment where there just wasn't as much motivation for criminals to crack apps, because there wouldn't be that much of a market for what they stole. So corps could get away with deploying almost anything, relying on both security through obscurity and security through rarity (breaches were rare due to low profit). People in corporate offices that even knew their corps had these apps would be rare because the prestige of managing these people and apps would be low.

The apps we have today would then be direct descendants of the old insecure apps, and in many cases would be built directly on those old apps. Layers of mud, and you can't change the inside layers because old mud is brittle.

And now the corps are going up against, not people who are merely exploring or looking for bragging rights, but people working for criminal enterprises that, while not having the global scope of banks, are large enough and focused enough to directly challenge the technical power of the banks. And the banks are working with old, dry mud.

Again, grains of salt, but I suspect I'm in the right salt mine.


I have a feeling there's more to it than a plain account number in the URL. It was probably a base64-encoded account number or an unsalted hash of it (i.e. rainbow-table reversible), and the quality assurance analysts probably never questioned this.

Disclaimer: I worked in product development making banking software and simple URL hacking was always a standard test.
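
Worth noting that base64 wouldn't have helped at all - it's encoding, not encryption, and anyone can reverse it in one line (Python, with a made-up account number):

    import base64

    encoded = base64.b64encode(b"361042123456").decode()
    print(encoded)                       # 'MzYxMDQyMTIzNDU2'
    print(base64.b64decode(encoded))     # b'361042123456' - back in plain sight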


"Security experts": The attack was "especially ingenious" and "would have been hard to prepare for."

The experts are certainly part of the problem.


I really hope that the reporter who wrote the NYT article either misunderstood the facts, or talked to an ignorant "expert".


I wouldn't necessarily blame this on the guy programming it. However, the person who spec'd the application would be due for a quick demotion. The problem with antiquated bank systems is that the teller is trusted with access to any account. So when it came to web-enabling the old teller application, someone did some screen scraping as a prototype without any concept of restricting access.

There is probably no concept of linking an authenticated account to a restricted set of bank accounts. Instead, they've probably wired it up to CICS directly to retrieve account details. This is why the quick fix appears to be obfuscating the account number in the URL.

Is there a public report anywhere? Aren't companies required to report all privacy breaches?


Historically, companies like Citi haven't faced any meaningful consequences for putting their customers at risk by not doing "security 101". Will things be any different this time?


The same bank also complains if you want to use more than 9 characters in your password. That kind of hints at how they store passwords in their database...


Sadly, other financial services companies do not seem to understand, either.

Etrade's password system requires 6 to 32 characters with at least one number according to the password change instructions. They don't mention punctuation, but if you try to include some, they're deemed to be "invalid characters". Go figure.

This stuff makes me sad that I signed up for etrade:

"Thank you for your message regarding enabling HTTP Strict Transport Security. I have sent a request to our Product Development Team to have this feature added. Due to the high volume of requests, there is no guarantee that this will be implemented." -- etrade representative

"ETRADE does not allow certain characters to be used when establishing a password online. There is no specific ETRADE publication providing information on why certain characters are not allowed, such as special characters used for punctuation, etc. We appreciate your feedback concerning the online passwords. I forwarded this suggestion to the Product Development Team for future consideration and implementation. I can not guarantee when or if this change will be able to be made..." -- another etrade representative


It's funny that this wasn't discovered sooner. I suppose everyone figured a security flaw like this would never exist, and never bothered to try. Irony...


Most likely this was discovered sooner. But people kept the vulnerability to themselves and possibly even profited from it.

These kind of vulnerabilities in high-profile sites can go unreported for ages, even up to the point that they are common knowledge in certain groups.


That's worse than SQL injection. Didn't they build an ACL?


ACLs are probably part of the problem here. Most ACLs are very inflexible and are "opt in". They probably had an ACL to block unregistered users from visiting the page, but it didn't deal with individual accounts.
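
To make the distinction concrete (a hypothetical Python sketch): a page-level ACL only asks "is someone logged in?", while the missing piece asks "does this specific record belong to them?".

    def page_acl(session_user):
        return session_user is not None                 # what they likely had

    def record_check(session_user, record_owner):
        return page_acl(session_user) and session_user == record_owner   # what was missing

    print(record_check("alice", "alice"))   # True
    print(record_check("alice", "bob"))     # False - blocked even though she's logged in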


> Think of it as a mansion with a high-tech security system -- but the front door wasn’t locked tight.

It's more like an apartment building with high-tech locks on the front door and on every apartment door. But after you unlock the front door, you can unlock any apartment's door with your key! The keys are all identical; only the number printed on the label is different.


Did anyone else check if the article was meant to be satiric? This is unbelievable!


I was reading about this yesterday and about an hour later, Citi called me trying to sell me their fraud protection. I replied "Did it help the 200,000+ accounts that were stolen from you?" and hung up.


It might have--I mean, when Citi sells you fraud protection for your Citi account, that's basically insurance against either you or them being careless with your account info.


I wonder what Citibank is going to do about this now? Are they going to change their customers' account numbers?

That's the minimum that should be done for those accounts which were compromised.


I use citibank for my family accounts so may have been impacted. Has anyone posted a list of compromised account numbers someplace I can check against?


Is there anything Citi has done right over the past few years? I'm asking honestly and not with sarcasm, because I just haven't seen it.


I wonder where they got the account numbers from. Citi doesn't show them in the web interface, just last 4 digits.


How the hell could this happen? I think even a 6-year-old kid could figure out URL injection... wait a second... Citi is a bank???


wouldn't a simple IF( $url_id !== $logged_in_id ) die("No access") fix this?


If they saved the $logged_in_id in the session then they wouldn't need to pass it as a parameter in the url. No, this probably indicates their entire architecture is ass-backwards.


So...the first login had to have valid credentials. Someone needed a citibank account to start the scraper-bot. Wonder if the fbi's talked with that guy yet.


I think the key phrase would be "Someone needed access to a citibank account..." If some Citibank account(s) was compromised in some way, such as phished, then a third party would have the necessary access. It would be bad enough to be a victim of phishing--imagine being a pawn in a large scale attack like that. It would suck.


I'd bet they used one of the hundreds of thousands of stolen accounts from other services to do this. You know somebody out there reuses the same username/password for their SOE and Citi accounts.


[deleted]


For the record, enabling UUIDs in Rails is straightforward. It's not the default, though.

Why turn on UUIDs? They are not guessable the way sequential integer IDs are.


Depending on what sort of UUID you're generating, that could leak your MAC address to an attacker, which could prove useful to them.
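
For the curious, Python's uuid module makes the difference easy to see: a version-1 UUID embeds the host's MAC address and a timestamp, while a version-4 UUID is purely random, so it's the one to reach for when the id is meant to be unguessable.

    import uuid

    print(uuid.uuid1())   # last 12 hex digits are derived from the machine's MAC address
    print(uuid.uuid4())   # 122 random bits, nothing about the host leaks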





