
The problem with AWS (and other cloud providers) is that it's nearly impossible to properly configure an environment because of how many different methods there are to gain access to resources.

Capital One has been all in on AWS and has dedicated an immense amount of time and money to developing systems for managing their AWS resources (Cloud Custodian for instance) and yet they still couldn't protect their data. What chance is there that anyone else could?




The whole point of moving to a cloud provider is to allow quick setup and deployment of new projects/products, as well as to limit your costs. With that sort of open-ended system, unless everyone is always thinking security first and is okay with the inevitable slowdowns of a highly locked-down system, you will more than likely always run the risk of this sort of situation.


Having everything locked down by default on AWS/Azure/GCP would go a long way to improving the security of the internet. Centralisation isn't healthy, but at least these companies could make a credible impact on data security by pushing the mentality.


All AWS APIs are deny-by-default. Only if a pertinent policy (IAM or resource policy) grants access is it allowed.

IME, the usual mistake many implementors make is that they inadvertently grant too many privileges and often to the wrong audience.
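As an illustration (the role name, bucket, and policies below are hypothetical, not from the article), the difference between an over-broad grant and a scoped one is small in text but large in blast radius. A minimal boto3 sketch:

    import json
    import boto3

    iam = boto3.client("iam")

    # Over-broad: any S3 action on any bucket -- the kind of grant that turns
    # one leaked credential into a full data exposure.
    too_broad = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
    }

    # Scoped: read-only access to objects in a single (hypothetical) bucket.
    scoped = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        }],
    }

    # Nothing is allowed until a policy like one of these is attached to the
    # (hypothetical) role; the question is only how much you hand over.
    iam.put_role_policy(
        RoleName="example-app-role",
        PolicyName="s3-read-only",
        PolicyDocument=json.dumps(scoped),
    )

The scoped version is what deny-by-default is supposed to buy you; the broad one quietly gives it back.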


> The whole point of moving to a cloud provider is to allow quick setup and deployment of new projects/products

There is nothing approaching quick setup and deployment at large banks.

Not Citibank, but I previously worked for a financial firm that sold a copy of its back office fund administration stack. Large, on-site deployment. It would take a month or two to make a simple DNS change so they could locate the services running on their internal network. The client was a US depository trust with trillions on deposit. No, I won't name any names. But getting our software installed and deployed was as much fun as extracting a tooth with a dull wood chisel and a mallet.

This is my experience with one very large bank, but from speaking with others who have worked for/with other large banks, their experience has largely echoed mine. They tend to be very risk averse with external IT products, e.g. deferring critical security updates because they can't be sure what might break, and they likely don't have end-to-end tests for the critical systems that could cost a lot of money if an upgrade fails.

I know this first hand, because you don't always know or understand what's going on in 3rd party systems. I once screwed up a 3rd party system hosted on site. I was testing an upgrade on a dev server. Part of it involved schema changes, and I had dbo rights on both production and development servers. The part I didn't realize is that the 3rd party tool stored DB settings in your Windows roaming profile. So, because we only had 1 Windows AD domain and no other network separation, even though I was on a dev box, I was talking to the prod DB. Didn't even realize it (it wasn't directly evident unless you dug deep into settings) until I started getting calls from my users, complaining of errors. This was on the 3rd of July in the US. By the time I figured out the issue, it was about 3-4am on the 4th of July.

Had to make the call of rolling forward or back. But the supplied installer was missing some packages, so it couldn't complete the install. If we rolled back, an entire day's worth of tedious work by a 10-person team would have been lost. Worse yet, the tool was used by traders in Europe who were about to start their day. Being early in the morning on a US holiday, I couldn't reach their support. Couldn't even get ahold of their EU support. I was on the phone with my boss, his boss, and the head of back office in the wee hours of the morning on a holiday.

The decision was made to hold off on doing anything until we could talk to the vendor on the 5th. Ended up rolling forward and completing the install, but I was nearly shitting myself. We were holding somewhere around 25B USD notional in bank debt that we could take no action on, which caused huge issues in PNL (profit and loss) reporting for several business days.

Thought for sure I was going to be fired. But in the post mortem, I explained everything, and it was agreed that while I shared some blame, the totality of it wasn't my fault, and because I had diagnosed and fixed it in the most timely manner I could, I was ok. IIRC, the only real remediation we took to prevent a similar mishap was to disable roaming profiles on the dev server and delete all existing profiles on the dev servers...


Yep, sounds like a bank to me. I worked at one of the big 4 for 6 years (way too long, I know) and the experience was horrible. It once took us a full year (no exaggeration) to get a single server allocated... and my group was actually one of the well-funded teams.


Funding wasn't a problem for the client in my story. They were happy to spend money. I think the initial contract was for X million USD, which would have covered something like 5000 support hours on our end (it was based on time spent, not per incident), and after that it was like 300 USD per hour.

Separate project: I know I was billed out at 500 USD per hour 10 years ago. That was working with an exchange. It was initially a joint venture, and my company decided to divest itself. We sold the exchange all the source for the system that we had developed and that they'd be running. We clearly documented our "build" process and requirements. The core part of the system (and as far as I know the only part that ever went live) was a Python app that used very specific modules, plus some patches we had submitted upstream that were not yet in public distributions. So we were very explicit: you need exactly these versions of Python, these exact versions of the libs, and you need to apply our patches to the libs. We had also only developed and tested on a specific version of Linux, and made it clear they should use the same, or we couldn't guarantee the software.
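For illustration only (the interpreter version, library names, and version numbers below are made up, not the project's real ones), a startup guard along these lines is roughly what "exactly these versions" boils down to in practice:

    import sys
    from importlib import metadata

    # Hypothetical pins -- the real project documented exact versions of
    # Python and each library, plus local patches applied on top of them.
    REQUIRED_PYTHON = (3, 8)
    REQUIRED_LIBS = {"twisted": "20.3.0", "simplejson": "3.17.0"}

    def check_environment():
        # Refuse to run on anything other than the tested interpreter/lib set.
        if sys.version_info[:2] != REQUIRED_PYTHON:
            raise RuntimeError("unsupported Python: %s" % sys.version)
        for name, wanted in REQUIRED_LIBS.items():
            found = metadata.version(name)
            if found != wanted:
                raise RuntimeError("%s %s installed, %s required" % (name, found, wanted))

    check_environment()

A check like that fails in seconds on a mismatched install instead of producing mysterious runtime behavior.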

Well, we handed all of the source and documentation to the exchange. They, in turn, hired an outside consulting group. For the life of them, they could not get it to work. First question asked was: did you follow the instructions? Response was "of course, do you think we're idiots?"

The assertion that they had followed the instructions exactly sent me down a roughly 3-week debugging session, attempting to reproduce the issues they were having in our office. Starting from scratch with the exact instructions I had written up for them (I was the only author of the Python app that was failing), I could not reproduce the issue.

After 3 weeks of back and forth, escalations on all sides and some thinly veiled accusations of sabotage, I went on site, sat down with the consultant, told him to start from scratch and show me what he'd been doing.

First thing I noticed was that he installed the latest version of Python and the latest versions of all the extra libs we needed. He'd completely ignored all of our instructions despite telling us the exact opposite!

It took all of 15 minutes to identify and correct the issue. Ended up billing close to 40K USD in support because the contractor didn't follow instructions and, well, lied (intentionally or not) about having done so. Never heard a peep from management about the hours or questioning the resolution, and as far as I know the exchange paid the bill without question, even at the height of the aftermath of the 2008 crash.


I think AWS's use of automated reasoning in this space is groundbreaking and shows the way forward for complex systems in the future.

See also: https://aws.amazon.com/blogs/security/protect-sensitive-data...



