Principles for Building New SaaS Products on AWS (trek10.com)
212 points by jackgill on May 5, 2020 | 90 comments



This sounds like an excellent guide to a perfect vendor lock-in. It's just missing the final principle:

#4 Operate as if you may be bought or cloned by Amazon at any time.


Unless you're entirely on Kubernetes (and even then there are some sticking points), using a cloud of any kind already has you "locked in" via some of the most basic things like IAM. If you're going into a cloud, you may as well reap the benefits of the premium you're paying.


I wonder whether anyone who actually goes to all the lengths of trying to "avoid vendor lock-in" has sat down and done a realistic project plan of how much it would cost in man hours and how many regression tests would be required.

Once you have any type of meaningful infrastructure you’re already not going to migrate without major pain between data migrations, networking, security and compliance audits, retraining, etc.

And if you are only using one of the major cloud providers as a glorified colo, you have the worst of both worlds: you're paying more for infrastructure and your TCO is no lower because you're still babysitting infrastructure.


I want to know who has actually accepted a cloud vendor's lock-in... and then regretted it, had to change, and wished they'd done it up front. I have yet to meet them.


Even companies that have decided to start on a cloud vendor and then found product market fit, grew their in house competencies, and then decided to build out their own data centers didn’t express “regret” for starting on the cloud provider. At that point they were well capitalized enough to move off.

Even almighty Amazon took a decade to move off of Oracle. Concentrating on avoiding being tied to Oracle early on instead of building a business would have been silly.

Trying to avoid “vendor lock in” when you’re still trying to grow your business, when your focus should be trying to acquire customers, and raise revenue and profit is the ultimate premature optimization.


> Trying to avoid “vendor lock in” when you’re still trying to grow your business

It depends on your core business. Don't be "vendor locked-in" for any core business competencies, but do get "vendor locked-in" for non-core competencies. If your core competency is merely white-labelling another business's offering, then obviously it's bad practice. But for things that are easily commoditized, like CRM and payroll, why not be "vendor locked-in", save the cost of development, and enjoy the lower scaling cost?


Why not go open source, accept the increased development cost, and enjoy the lowest scaling cost?

If the price per seat is $5.00 per person today and next month it could be $5,000 because they require you to pay surge pricing, getting locked in could kill your business quickly.


By surge do you mean due to an increase in one's own usage, or in demand for the provider's service?

Do any cloud providers have surge pricing? (I've not read too many SLAs myself.)


I’ve never heard of “surge pricing” or anything similar. Sure, there is spot pricing, where you tell AWS you are willing to pay up to $x for compute and the actual cost varies with demand. But you use that to save money, for workloads that benefit from more compute but are only worth running at a certain price. You would never pay more than the published price.
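(For completeness, a minimal sketch of what a capped spot request looks like with the Node SDK v2; the AMI and price are hypothetical:)

  const AWS = require('aws-sdk')
  const ec2 = new AWS.EC2({ region: 'us-east-1' })

  // Ask for one instance, but only while the spot price stays at or below our cap.
  ec2.requestSpotInstances({
    SpotPrice: '0.10',           // the most we're willing to pay per hour
    InstanceCount: 1,
    LaunchSpecification: {
      ImageId: 'ami-12345678',   // hypothetical AMI
      InstanceType: 'm5.large'
    }
  }).promise().then(r => console.log(r.SpotInstanceRequests))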

AWS has never raised the price for a service since it existed.


Not completely true; sometimes they raise prices by launching a next-gen version of the service with higher pricing, e.g. ELB vs ALB.


ELB is still available and still does the job.


AWS never discontinues services; they just remove them from the marketing website and/or make them harder to find.

I can still launch old instance types which are no longer listed. Same with SimpleDB; it still works.


That's what I mean. I recognize it can be an issue later on, and then you deal with it...I've yet to see the case where it was worth worrying about up front, even in hindsight.

I also haven't seen the reverse, the multi-cloud people who later on drop a cloud provider because of some reason, going "omg, thank goodness we didn't let ourselves be locked in to one, this saved so much time/money/effort".


Vendor lock-in shows up when you want to cut costs. For example, during a recession.

It's basically a fixed cost imposed by the cloud providers.

I.e., when the economy contracts and yet AWS/Azure profits are growing, that is the pain of vendor lock-in.


And employees' salaries are not a fixed cost? Or are you going to ask your employees to do double duty because "we are family and we are in this together"?

How many companies are on such a thin margin between surviving and going out of business that cutting infrastructure spending by 10 or 20% would make the difference in whether they survive?


Yes, they are. But that is beside the point.

Which is: vendor lock-in did not show itself because the economy was good.


Companies have been “locked-in” to vendors since IBM in the 50s. There are still plenty of successful companies that have been dependent on IBM for decades and weathered plenty of recessions. That’s not to mention all of the companies in the modern era that are completely “locked in” to the Microsoft ecosystem, spend literally millions on Windows, SQL Server, Active Directory, Office 365, etc., and have also weathered recessions.

Any large company depends on literally dozens of vendors that they have tied their business process into where it would be a major pain to switch.

I know that in the health care industry a lot of health systems are so tightly tied into their EMR/EHR that leaving something like AWS would be a breeze by comparison.


Could it be large companies have war chests to draw on?

Smaller companies often run leaner. Except maybe for big airlines that spent their surpluses on stock buybacks and executive pay increases.


The clouds are designed for elasticity. Cutting costs there would be easier than some colo/on-prem environment which has much more contract lock-in.


Did you consider that AWS profits might be growing because everyone and their mom and their grandma are sitting on the Internet now?


Their profits are growing...their pricing isn't. AWS has yet to increase the price of any resource; they've only ever lowered them.


Every now and then we have articles about somebody making some seemingly insignificant configuration mistake that ends up costing them tens of thousands of dollars in a short amount of time.

There was something in the last month where somebody closed their company because such a mistake burned through many months' worth of bootstrapping budget.

I don't know if that counts as regretting the lock-in, but I guess they regretted something about their cloud provider.


No, not understanding your vendor's pricing is a very separate issue. I'd imagine had they gone multi-cloud they'd regret it even more.


>>> I wonder whether anyone who actually goes to all the lengths of trying to “avoid vendor lock-in” has sat down and done a realistic project plan of how much it would cost in man hours and how many regression tests would be required.

Did that for a web startup. Tens of services, 200 instances.

Estimated 1-2 weeks to move to another cloud provider (Google/Azure) and be back online for customers.


I also work for a startup. While we have fewer instances, we have a lot of data, and any move would trigger audits for security and compliance. We would have to work with each of our customers (large businesses) and make sure they whitelist the new IP addresses (yes, something as simple as depending on your vendor for an IP address can create lock-in).

100-200 VMs - especially if they are all based on the same few images - is easy.


We also have audits, compliance and security to take care of, and at least IP whitelisting around payment integrations.

I am considering a disaster recovery scenario, like AWS banning the account or all the resources being deleted. There is nothing to do but start over from scratch, with every single employee on board to help get their bits of the services working again. Audits are not a rebuild step; they're something to arrange much later.


That's true to varying degrees. Sure, you're locked in when you use ALB, but it's not too hard to replace it with HAProxy. Same with RDS to Postgres, or Fargate to just running an app on your own server.

In general, if you build apps with open source runtimes, use open source dbs, and avoid the proprietary data services cloud providers really stick you to, you can move around pretty easily. And you still reap most of the benefits.

The real problem with the special AWS services is you end up having to hire AWS ops people or expensive consultants to architect around them and run them for you. So it's proprietary AND eating up salaries.


What benefit do you get out of going to a cloud provider instead of a cheap VPS solution or a colo if you’re not using any of their managed services?


They have managed services that are relatively easy to replace and managed services that are entirely proprietary. Many of them map to reasonable OSS tools. ALB/RDS/Fargate and even just EC2 + VPCs are super powerful, and replacing them is a known quantity.

But stuff like SQS and (to a lesser extent) Lambda is really hard to replace because it's thoroughly baked into an application architecture.

As an example, we (fly.io) have a tool that'll hoist a Fargate app into our infrastructure and let you run it all over the world. We even have people tunneling back into their VPCs to access other AWS services. But that only works because we're both somewhat standard, Fargate takes a Docker image and runs it, Fly takes a Docker image and runs it, the app inside doesn't care about either of us.


So now what happens when you need to migrate a massive amount of data while keeping everything online? Let’s say you want to trigger some code to run when a file gets dropped on S3. Are you going to spend Developer time trying to come up with a bespoke solution or are you just going to trigger a lambda on S3? Are you going to host your own highly available queueing and messaging system and have to run them on EC2? Sure it’s “easy” to replace your entire networking infrastructure and run it on another provider, but how many man hours is that going to take?
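(For what it's worth, the S3-to-Lambda trigger side really is tiny; a minimal Node handler sketch, with the bucket wiring assumed to live in the S3 notification config:)

  // Invoked by an S3 "object created" notification.
  exports.handler = async (event) => {
    for (const record of event.Records) {
      const bucket = record.s3.bucket.name
      const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))
      console.log(`processing s3://${bucket}/${key}`)
      // ...real processing goes here
    }
  }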

Have you actually costed out how much a large migration would take?

As far as using Lambda for an API, you can literally add three or four lines of code and use “proxy integration” to deploy your entire Node/Express, Python/Flask/Django, or C#/WebAPI app on Lambda, and then, without changing any code, deploy it anywhere else as you would any other API.

Here is a Node Express example.

https://github.com/awslabs/aws-serverless-express


Yeah, I'm going to do all that, because it's helpful to avoid AWS lock-in, especially for a SaaS that wants to make any kind of margin. Companies do this all the time, and I don't think (for the most part) the proprietary AWS services add much value.


How many man hours are spent trying to avoid lock in and how much would the cost delta have to be between your current cloud provider and a new provider to make a migration make sense? How many fewer employees could you have if you depended on managed services? How much time could your employees spend on creating features that could help you acquire customers or get your current customers to give you more money?


I feel like you have this a little backwards. Surely AWS has done the work to show the savings from their services vs rolling your own. It's a little silly to just assume proprietary AWS services are an overall cost savings and ask for proof that it's not true.

And, most of what you're talking about doesn't really affect margins for a SaaS. Every AWS service hits margins, one time migration costs and even R&D time to build products does not. The marginal cost of using AWS underneath interesting features is very high.


People cost money. You can buy a lot of services on any of the cloud providers if you can save the fully allocated cost of one employee - say $180K. That’s just looking around in any major city in the US, not west coast salaries.

Every dollar you spend on R&D or migrations you have to consider whether that dollar could be better invested somewhere else and whether it adds business value to have the expertise in house or to outsource it.

Dropbox and Backblaze, for instance, decided that storage was a core competency. Dropbox moved away from AWS and Backblaze knew from day one not to get on it.

On the other hand Netflix went the other way and is now AWS’s largest customer.


All of that == add four lines of code?

Here is the sum total of how much “work” you have to do.

  // Wrap the existing Express app in a Lambda-compatible server
  const awsServerlessExpress = require('aws-serverless-express')
  const app = require('./app')
  const server = awsServerlessExpress.createServer(app)

  // Lambda entry point: proxy the incoming event to the Express app
  exports.handler = (event, context) => { awsServerlessExpress.proxy(server, event, context) }
How much time do you spend babysitting infrastructure and how much money is it making your company or saving your company? How many of your customers care about your valiant efforts at “avoiding lock in”?


It's not skipping managed services but using the managed services which are relatively interchangeable: e.g. if you use AWS RDS Postgres or MySQL, ALBs, ECS containers, etc. you get the security and ops benefits, along with being able to use tools like Terraform to manage it, but if you ever had to switch you haven't built semantics into your application which aren't provably available elsewhere.

If you build the app around e.g. Lambda and DynamoDB, in contrast, you're going to have a harder time both because you need to restructure the code but also verify that what you switched to doesn't have key differences in how things operate.


Lambdas are pretty easy to move away from in my experience. It's "just a function", and functions can be really easily moved into a shim for another FaaS or even dedicated instance.
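A minimal sketch of that kind of shim, keeping the business logic provider-agnostic (names are illustrative):

  // Provider-agnostic core: plain input in, plain result out.
  async function resizeImage(input) {
    // ...real logic here
    return { ok: true, requested: input.size }
  }

  // Lambda shim (API Gateway proxy event assumed)
  exports.handler = async (event) => ({
    statusCode: 200,
    body: JSON.stringify(await resizeImage(JSON.parse(event.body)))
  })

  // An Express shim for a dedicated instance would be just as thin:
  // app.post('/resize', async (req, res) => res.json(await resizeImage(req.body)))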

Dynamo is tough. At the same time, it's really really good.


The main thing I think you hit with Lambda is if you’re heavily calling other services. The API contract is definitely defined in a manageable fashion.


> What benefit do you get out of going to a cloud provider instead of a cheap VPS solution or a colo if you’re not using any of their managed services?

It's impossible to use a cloud provider without using any of their managed services; if you are using EC2 or a similar IaaS as if it were just a basic VPS, you'd probably be better off (cost for the use) with a basic VPS, but you'd also probably be better off with Amazon LightSail, which more closely approximates a basic VPS service.


> if you are using EC2 or a similar IaaS as if it were just a basic VPS

What more is EC2 than a virtual machine? Sure, you have it tied into IAM, which is tied into firewalls and whatnot, but that doesn't make it radically different from a VPS.


You can't avoid using any of the AWS services. The basics of AWS are VPC, EBS, S3: unlimited, instantly available managed storage and networking.

A cheap VPS provider or a colo does not give you that storage and networking. There are no cheap or open source solutions that do any of that.


Disclaimer: I work at Gravitational.

We build an open source solution[1] to deploy autonomous Kubernetes clusters into on-prem or air-gapped environments but it's also useful for limiting cloud lock-in (even has its own "IAM" built in). Of course, you have to also limit your use of proprietary services (which definitely has its trade-offs) but might be worth poking around if you believe reducing lock-in is worth it.

[1] https://github.com/gravitational/gravity


Avoiding lock-in leads to a least-common-denominator approach to using the cloud where you will miss out on the advantages of managed services. There can be good reasons to buy into a particular cloud vendor and there are ways to be modular at the software level without wasting cycles making it compatible with multiple clouds.


That is fine as long as you can make money fast. The same applies if you decide to go with Azure, GCP, IBM Cloud, etc. When you grow from 1,000 to 1M subscribers, what matters is whether you can scale in hours or months.


How often does this actually happen? 1 in 10? 1 in 1,000? 1 in 100k, etc.?

(Not baiting just curious)

AWS passed 1MM customers (businesses) in 2015-2016.

If this is the case, do we not run more risk of our landlord, best friend, VC partner, banker, vendor, first X hire... also stealing our business/model, and isn't that more likely than AWS doing it?


I agree. Also I don't see the point for AWS to steal/clone a project unless it's some kind of AWS orchestration tool or similar.


I found myself nodding agreement until I reached the Cloud9 part... I think I can count on one hand the number of professional devs I know that use Cloud9 as their primary editor/IDE. Curious if I'm in a bubble on that front?


Author here. HN in a lot of cases has readership that I’d put in the segment of developers that should absolutely not be using Cloud9 daily.

I personally take using Cloud9 to the absolute extreme (https://www.trek10.com/blog/i-buy-a-new-work-machine-everyda...), having my Cloud9 env setup scripted and creating a new one every day/project. I don’t really recommend that approach for most folks. Anecdotally, it paid off well when I left a Mac on a train and was able to walk into an Apple Store, grab a new one, and lose minimal productivity for the day.

However, the flip side of all this is that I regularly work with a lot of IT people who have underpowered machines, flaky/poor internet, or crazy restrictions on their work machines that cause all sorts of problems with CLI/program installation, etc. I’ve found Cloud9 to be super liberating for those folks, particularly given the parity of Cloud9 to the AWS Lambda runtimes.


We do a lot of GPU server stuff and are victims of the Apple vs Nvidia BD teams being broken. The remote solution we came to is VS Code's remote mode that tunnels over SSH (dir listing, edits, git tracking, ...) yet maintains your normal native IDE responsiveness.

The 95% case is still local, but it makes remote more OK when CI + Jupyter + quick vim isn't enough.


Thanks for the great content!


Appreciate you!


You probably aren't in a bubble, but the whole edit-in-the-browser, cloud-native editor space is really becoming interesting. The interesting part is more the integration of dev, testing, and deployment into the editor in a way you just can't do without a lot of devops work with a traditional IDE.


I don't think you're in a bubble, but we've recently started evaluating Coder and have found the switch to a cloud-based IDE (especially when you add Progressive Web Apps to the mix for native keyboard shortcuts) has been extremely attractive. I'm finding myself more and more drawn to hosted IDEs where I don't need to worry about network performance or how the Docker VM on my Mac is eating up my battery life...


I use Cloud9 for 100% of my projects. I don’t have to worry about dependencies on my local machine or virtual environments or local resources or permissions or disk space. It stops charging for use when I stop using it so my average AWS bill is in the single digit dollars per month. I can use whatever machine I want because it’s just a dumb terminal.

I am a big fan, have been since before Amazon bought it. I’m shocked more vendors don’t have something similar.


Yes! Slowing down all software development seems like an overreaction to the risk of dev/prod environment differences. Good architecture design, modular components, and isolated unit tests should really minimize the number of times this will become a problem. I can't imagine giving up the productivity improvements of insert-fav-ide-here to address unforeseen or even hypothetical bugs.


I use it quite often, but as a tech-blogger, I'm an outlier.

I write about AWS products and it's quite nice to get an IDE preinstalled with AWS CLI tools with the push of a button.

For JS it's mostly an okay-ish IDE, not as good as VSCode, but okay.

For things like Rust or Reason it sucks quite a lot.


IME Cloud9 is helpful for building applications where you need public endpoints (like if you're using OAuth) or in cases where you want a Linux environment but are on Windows/macOS/ChromeOS. But I would pass for everyday use.


I don’t use it as my primary editor/IDE, but it is convenient if I’m on my iPad. Setting up the environment and connecting it to your Git repo makes it very easy to debug in a pinch.

I wouldn’t be opposed to a cloud-based primary IDE (as long as it functioned offline), but I don’t think we’re there quite yet.

So it’s not my primary IDE, but it’s a solid backup.


I used it as my full-time IDE for a while when I only had a Chromebook. I like what it offers, but it just wasn't performant and responsive enough to live in all day, IMO.


Our startup used this setup in 2013 when I guess it was a much more hipster thing to do! We used Cloud9 connected via SSH to our sandbox AWS account, edited code in the cloud, restarted the Node server running dedicated to each developer's sandbox, and had a sandbox URL per dev (e.g., jake-dev.startup.co) to see the code update live.


"I'd ask most developers to start their day in Cloud9" -- hmm, no. Most developers I know have powerful computers (many of them are gamers), so it's wasteful not to maximize the ROI.

Also, not everyone has fast/reliable internet all the time :)


Author here. Similar reply to one I gave below, but a significant number of developers I work with in enterprise or corporate contexts aren't in that situation. Cloud9 can be fairly liberating for them short term, especially while learning the ropes of AWS.

The majority of HN readership I’d encourage to continue using their own tooling, you’ve got fast internet, unrestricted access and powerful equipment.

That all said, I default to Cloud9 these days just so I can bounce around machines and have a consistent dev environment when I need it. A lot of my daily job is meeting teams where they are and helping them be productive fast as possible so I need to stay semi-fluent in most operating systems.


Have you tried Linux Workspaces?

(Compared to Cloud9, I greatly prefer Workspaces, but still use Cloud9 on occasion for a few niche use cases)


Yep! They work great in many situations. However, Cloud9 is quite a bit more usable and stable on something like shaky/inconsistent airplane wifi. It’s also way less friction to set up and tear down 3 or 4 Cloud9 instances in a day compared to Workspaces.

I treat Cloud9 like any other ephemeral editor process. Need a new editor window? Cloud9 project. Done for the day? Commit everything I care about. Tear it down.

That said, I frequently spin up Windows workspaces to test software or workflows if I’m writing a guide or content.


These get a little boutique, but if you want to attract and close large enterprise/regulated customers, I'd extend a few of yours:

- Build your application to suit many customers or one. Large customers love to have their instance run in a dedicated account.

- Open source your operations. Large customers love to see logs/operational activity from their environments (e.g. cc CloudTrail/Config/CloudWatch logs to the customer; this assumes a dedicated account).

- Open source your security. Be prepared to ship GuardDuty, Config rules, etc. to your customer (this assumes a dedicated account).

Also, since we're focusing on AWS here

- Build your application to support hybrid cloud customers. Expose it through PrivateLink, VPC connections, transit gateways, Firehose, API gateways, VPC Lambdas, whatever is appropriate for your architecture.

- Leverage IAM as much as possible for authentication/authorization. Not AWS-specific but implement SSO and assume customers will require numerous instances of your service when developing your user account model.

- Leverage KMS as much as possible to protect data at rest, and support customer CMKs (a quick sketch follows after this list).

There are more but I'll stop here.
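On the customer-CMK point above, a minimal sketch of what supporting one can look like at the S3 layer (Node SDK v2; the bucket name and key ARN are hypothetical):

  const AWS = require('aws-sdk')
  const s3 = new AWS.S3()

  // Encrypt the object at rest under the customer's CMK instead of the default key.
  s3.putObject({
    Bucket: 'customer-dedicated-bucket',                           // hypothetical
    Key: 'reports/2020-05.json',
    Body: JSON.stringify({ hello: 'world' }),
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE'  // customer-managed key ARN
  }).promise()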


Some really good advice here. Especially the "build as if you are going to sell at any moment" and "build as if you are going to open source at any moment".


Immutable Infrastructure is more important than Infrastructure as Code, fwiw.

The latter just means "it's in [version controlled] code". This has a variety of use cases, and it might mean you end up in a quagmire of complexity. It's become a cargo cult thing where I've seen people adopt horribly complex, fragile solutions over practical ones "because IaC".

The former is a principle that basically has no downside, and only improves operational integrity. Even if you're literally deploying everything by clicking in the Console, it's still massively more reliable, repeatable, and recoverable as immutable artifacts.

The next thing I'd recommend before investing heavily in IaC is auto-recovery. The most obvious example is Autoscaling Groups, but any health check combined with an automatic action such as restart or re-deploy can work. This works best with Immutable Infrastructure, and typically does not require IaC.
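A minimal sketch of that pattern with the Node SDK (v2), assuming an immutable image baked into a launch template (all names hypothetical): the ASG terminates and replaces any instance that fails the load balancer's health check, no IaC required.

  const AWS = require('aws-sdk')
  const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' })

  autoscaling.createAutoScalingGroup({
    AutoScalingGroupName: 'web-asg',                          // hypothetical
    LaunchTemplate: { LaunchTemplateName: 'web-immutable' },  // pre-baked immutable image
    MinSize: 2,
    MaxSize: 6,
    HealthCheckType: 'ELB',         // replace on failed load balancer checks, not just EC2 status
    HealthCheckGracePeriod: 120,
    TargetGroupARNs: ['arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123'], // hypothetical
    VPCZoneIdentifier: 'subnet-aaa,subnet-bbb'
  }).promise()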


I'm interested in hearing opinions about this principle in the context of data engineering:

> This also means leaning heavily into all the service offerings and orchestration tooling that is afforded to you by your platform.

I've built a data lake and several ETL pipelines using AWS native services (Kinesis, Lambda, Athena). It works but it's a bit...fiddly. I spend a lot of time configuring these services and handling various failure modes. I've been wondering if I should be looking at third party vendors like Fivetran or Matillion for ETL.

Does anyone who's worked with AWS data engineering services have thoughts on the trade-off between AWS native services and third party vendors in this area?
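(For what it's worth, much of the failure-mode fiddling I mean boils down to event source mapping knobs like these; a sketch with the Node SDK v2, ARNs and names hypothetical:)

  const AWS = require('aws-sdk')
  const lambda = new AWS.Lambda({ region: 'us-east-1' })

  // Kinesis -> Lambda with bounded retries and an on-failure destination,
  // so one poison record can't wedge an entire shard.
  lambda.createEventSourceMapping({
    EventSourceArn: 'arn:aws:kinesis:us-east-1:111122223333:stream/clickstream', // hypothetical
    FunctionName: 'etl-transform',
    StartingPosition: 'LATEST',
    MaximumRetryAttempts: 2,
    BisectBatchOnFunctionError: true,
    DestinationConfig: { OnFailure: { Destination: 'arn:aws:sqs:us-east-1:111122223333:etl-dlq' } }
  }).promise()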


Regarding the configuration and failure modes, I think something like CDK could be a great way to set all this up in a more familiar and readable way.

https://aws.amazon.com/cdk/
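As a rough sketch of what that buys you (assuming the aws-cdk-lib v2 JS API; the directory and names are illustrative), wiring an S3 bucket notification to a Lambda takes a few lines, and CDK generates the permissions:

  const cdk = require('aws-cdk-lib')
  const s3 = require('aws-cdk-lib/aws-s3')
  const s3n = require('aws-cdk-lib/aws-s3-notifications')
  const lambda = require('aws-cdk-lib/aws-lambda')

  const app = new cdk.App()
  const stack = new cdk.Stack(app, 'EtlStack')

  const bucket = new s3.Bucket(stack, 'DataLakeBucket')
  const fn = new lambda.Function(stack, 'Transform', {
    runtime: lambda.Runtime.NODEJS_14_X,
    handler: 'index.handler',
    code: lambda.Code.fromAsset('lambda')  // hypothetical local dir with the handler
  })

  // CDK synthesizes the notification config and IAM wiring for you.
  bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3n.LambdaDestination(fn))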


I can strongly attest to Snowflake. I regret that AWS doesn't offer the same features without making us jump through a maze of services with which we can emulate the same concept.


Thanks for sharing, I've heard many good things about Snowflake. In the past I've seen them as more of a Redshift competitor (data warehouse, as opposed to a data lake) but if they can simplify data ingest then I am definitely interested.


They're fundamentally different only in the model of decoupling storage and compute completely, but in a far simpler way than Redshift Spectrum, I feel. Some of their features, like zero-copy clone, are just not possible in AWS and make it extremely simple to do pipeline management in a way that (at least to me) makes the most sense.

It's also the most democratizable model I have seen - anyone who knows the slightest amount of SQL can be set up to explore the data in minutes.

The elephant in the room is that you need to use SQL. Their Spark connectors are as of now useless, so you either have to go with DBT, some homebrew SQL-stringing mess, or something like SQLAlchemy. We're currently developing some wrappers around SQLAlchemy to make this a bit less painful, but it's still so worth it.


Thanks for this perspective. It really helps the noobs trying to build more than single user stuff.


What’s the right granularity at which to shard AWS accounts? I haven’t gone down this road. Is it madness to consider this as a multi-tenant mechanism vs the usual foreign-keys-in-the-database approach? Is applying CloudFormation across thousands of accounts feasible?


Probably the most useful mechanism I have for determining this is “if this AWS account disappears, how screwed am I / can I recover.”

I tend to separate all of my projects/services, and each of those to environments.

A cold storage AWS account; audit and security (shipped logs, config changes, etc.); shared services in another account.

If the dev account gets hacked, that sucks, but we can clear it out.

If prod gets hacked (and deleted!), that super sucks. But hopefully the cold storage and audit accounts can help us out.

If some other services/projects account gets hacked, I don’t want to be worried about impact to unrelated projects.


Nice approach - for cold storage, what do you mean exactly? Manually rsynced backups or something? Most AWS services I’ve used that have backups built in don’t, as I recall, have cross-account writability.


RDS snapshot copying, EBS snapshot copies, S3 cross-account bucket replication, etc. Write-only, with no entry points into that account from your other accounts. (Preferably its own locked-down IAM role with MFA required.)
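For RDS the core of that is small; a sketch with the Node SDK (v2), account IDs and identifiers hypothetical:

  const AWS = require('aws-sdk')
  const rds = new AWS.RDS({ region: 'us-east-1' })

  // In the source account: allow the cold-storage account to copy/restore this snapshot.
  rds.modifyDBSnapshotAttribute({
    DBSnapshotIdentifier: 'prod-db-2020-05-05',  // hypothetical
    AttributeName: 'restore',
    ValuesToAdd: ['111122223333']                // cold-storage account ID
  }).promise()

  // Then, running as the cold-storage account, copy it so you own an independent copy:
  // rds.copyDBSnapshot({
  //   SourceDBSnapshotIdentifier: 'arn:aws:rds:us-east-1:444455556666:snapshot:prod-db-2020-05-05',
  //   TargetDBSnapshotIdentifier: 'cold-prod-db-2020-05-05'
  // }).promise()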


Cool - stealing this, thanks! Do you do the backups with a scheduled Lambda?


We also use test roles on top of separate dev/prod accounts. It has already saved us a couple of times when somebody deleted all instances (by mistake), but the blast radius was kept small.


I feel like the harder part of that would be migrations on databases and application code deploys. CloudFormation to configure VPCs would probably work fine.

If you're going to do that, though, you may as well do CloudFormation into customer VPCs and let them do all the "paying Amazon".


Think about where you need fault boundaries (e.g. cellular, zonal, regional, global) and match your account structure to that if possible. In practice that usually means one account per cell/zone/region per environment (test/prod).


What's wrong with region + stage?


> For instance, AWS doesn't have anything quite as tuned to fast frontend search experiences like Algolia.

https://aws.amazon.com/kendra/ ?


From my understanding when I first read through the Kendra info (albeit that was launch day), it is more enterprise knowledge-base search and not quite the same use case as Algolia.


I understood the same.


Not sure how much I can trust "AWS Gurus" about actually building SaaS products in the real world at a real company.

It's all good advice, but please provide me with the money, and resources necessary to do it all.


What a lame article. I don't find any utility in any of the principles expressed here. Also, in my experience (at AWS 2008-2014, before/after in the same industry), I can't recall instances of companies that either mentioned, or followed, these principles.

Specific critiques:

> #1 Build as if you may sell at any time

> ... it forces you to build with best practices and isolation.

The opposite. It gives you an incentive to postpone paying down technical debt: you want to grow and be acquired at the expense of whoever is going to integrate your startup into $bigco later.

Side note: "AWS Organizations" to me is simply a way for AWS to try to cover for the poor design choices of the organizational structure of an AWS account, and the unnecessary complications related to billing and metering.

Try to understand the AWS bill of a sufficiently large company - you won't. The AWS rep won't. The AWS Solutions Architect won't.

Also, never heard of a company being acquired at a higher price because it had a "proper" AWS setup.

Ah, forgot this: if your acquirer is using MS Azure, good luck telling them that you are using Cloud9 or other AWS-specific stuff.

Let's continue...

> #2 Build as if you may open-source at any time

Yes. In an ideal world. In practice, almost nobody follows best practices, because there's always some urgency that takes precedence. This is why companies like Accenture, PwC, Deloitte, etc. keep billing monstrous amounts of money to help large companies "migrate" or "evolve" or "adapt" or whatever buzzword they use.

> #3 Build with a cloud-native mindset

> ... going outside of the platform should be an exception and something you do only when truly needed

> My thinking on serverless these days in order of consideration.

> - If the platform has it, use it

> - If the market has it, buy it

Serverless, really? A promising, cutting-edge technology, sure. You want to bet your startup on Lambda? Go ahead. Lambda has been around for ~6 years now (I even tried a super early version internally before it was released), and I still haven't seen a large company doing A LOT of development on Lambda. Most projects using Lambda are small, confined, isolated projects and/or teams.

Cloud native is great WHEN it makes sense. I don't like religion too much, and I don't like religious people either; the author seems to have taken his faith in AWS too seriously.


This is blogspam.


It isn't blogspam just because it doesn't interest you.


It's a "3 tips" promotional listicle on a company's website. It's the very definition of blogspam. I am interested in the topic, which is why I clicked. It's blogspam.



