I've used both and will gladly recommend AWS over Azure:
* Azure APIs, tools and services get deprecated often. As soon as the third-party docs get good and the worst bugs get fixed, the thing gets deprecated. AWS has its share of v2 APIs, but most of the fundamental services have stayed the same for more than 10 years. The new services build on top of the old ones instead of replacing them.
* MS reps will endlessly spam you and every colleague in your org to adopt their latest preview features (gasp! You're not using ML and AKS??). But if you end up trying them, you'll find they're half-baked, and if you get any sort of incident they'll ask you to rewrite your app to accommodate them rather than fix their own bugs/state.
* AWS has great 1st party documentation, and the stability means great 3rd party documentation gets created as well. Azure gets astroturfed "community" docs written by Microsoft employees. "Community" means that support and official pages link to them, yet Microsoft has no obligation to keep them up to date or take any responsibility for their contents.
* I admit I like Azure resource groups (a logical container of resources in an account, required for every resource and simpler than the AWS equivalent). However, I don't really miss them when I follow best practices (billing alerts, separate accounts for environments, IaC, tags).
> Azure APIs, tools and services get deprecated often
My experience with "new" Microsoft is that they're still learning how to play nice with others.
Not in that they don't want to, but that they're objectively bad at it, because it's not something they're institutionally used to doing.
"Old" Microsoft was "We build what we want, how we want, at the pace we want, and we produce some very polished final docs, and you use what we built how we expect."
Continually iterating on externally facing services owned by a small team, and consumed in arbitrary ways by third parties, is a very different model than the above. We'll see if Microsoft can re-org to the challenge.
They’re the same asshats, with a thin veneer of marketing over the top, telemetry underneath, all floating on a vat of poor testing, poor delivery and poor quality.
I had a defect open for 9 years on Connect that affected 20,000 users. Couldn’t pay them to fix it even as a gold partner. Now they just abandon all their shit on GitHub instead and sell you a subscription to be served up ash and beer dregs, while reminding you that they love Linux and open source and trying to get leverage to replace whole chunks of it with their bananas ecosystem.
Edit: must be lunch time at MSFT. Downvote flurry.
> Couldn’t pay them to fix it even as a gold partner.
People complain about the old MS, but it used to be if you paid enough things got fixed. Cough up enough money and you'd get a direct line to the lead developer of whatever product you were having an issue with.
When I first joined MS back in ~2006 I got to witness this first hand, a high value customer had a problem with some C++ code and MS provided a translator and put the customer in contact with the compiler team.
The issue with "metric driven development" is that if something doesn't move the needle, it isn't addressed. What the #s don't show is that the sum total of small annoyances lead to massive customer unhappiness.
> while reminding you that they love Linux and open source while trying to get leverage and replace whole chunks of it with their bananas ecosystem.
Many of the senior/principal devs at MS now grew up reading /. and hating the MS of the 90s. The culture there is massively different, both for better and for worse. 1990s MS had huge documentation teams and tested the living daylights out of all code before it hit customers. The massive doc teams were laid off during the '08 recession and AFAIK never hired back (at least not in the incredible numbers they used to be), and after Google/Facebook scared MS with "move fast and break things", MS got rid of their testers as well (as has the entire industry), so now the code isn't as reliable.
I think it's more that your comment is 95% salt and 5% substance, and so isn't adding much to the conversation. More details like the Connect issue and less "ash and dregs".
Well, to give just one example: zip deploys (code) to azure functions always return 200 OK, regardless of whether the deployment actually succeeded. To know if the deployment succeeded, you need to use a second API (the kudu mgmt api) to access the text logs, and scan the text logs for errors. Due to our CI infrastructure, we do multiple deploys per day, and I can tell you that most of our deployments require 2-3 attempts to deploy successfully. We have a whole salt mine of workarounds like that in our deployment infrastructure.
It's been that way for years. Every now and then, they pull another Microsoft on top of this. The latest that I remember is that even after successful deployment, the function would actually not get activated until you polled its existence using the Az cli tools. We hit that one because we used the Powershell Az module, and of course the cli tools and powershell exercise different code paths :/
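For readers hitting the same wall: the async variant of the zip deploy API does expose a status URL that can be polled instead of scanning Kudu log text. A minimal sketch, assuming Kudu's usual deployment-status JSON (a `complete` flag plus a numeric `status`, where 4 is success and 3 is failure); the helper names are made up, and the status codes should be verified against your Kudu version rather than taken as authoritative:

```python
import json
import time
import urllib.request

# Kudu DeployStatus values as commonly documented (verify for your version):
STATUS_FAILED = 3
STATUS_SUCCESS = 4

def deployment_outcome(entry: dict) -> str:
    """Classify a single Kudu deployment-status JSON entry."""
    if not entry.get("complete"):
        return "pending"
    return "succeeded" if entry.get("status") == STATUS_SUCCESS else "failed"

def wait_for_zip_deploy(status_url: str, auth_header: str, timeout_s: int = 300) -> str:
    """Poll the status URL returned by POST /api/zipdeploy?isAsync=true
    (found in the 202 response's Location header) until it settles."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        req = urllib.request.Request(status_url, headers={"Authorization": auth_header})
        with urllib.request.urlopen(req) as resp:
            outcome = deployment_outcome(json.load(resp))
        if outcome != "pending":
            return outcome
        time.sleep(5)
    return "timeout"
```

This doesn't excuse a sync endpoint returning 200 on failure, but polling the status URL can replace at least some of the log-scraping workarounds.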
Hi tremon. I work for the Azure Functions team. Sorry for the trouble. This should not happen; we should be giving you a proper response code when the deployment fails. It looks like you are using the sync version of the API, but just double-checking: are you using the sync or async version? If you are using the async version, we return a 201 immediately and then you can use the status URL to poll for the status. I would like to dig in and figure out what's happening here.
For the PowerShell Az issue as well - it would be good to know what command you were using/what you were trying to do. We don't support deployment of code using the Functions PowerShell module (https://www.powershellgallery.com/packages/Az.Functions/4.0....). Did you deploy using the Az CLI and then the PowerShell module?
This task always succeeds. Whether that is because it's using the async API and ignoring the status url or because of different reasons is beyond my control. All I can tell is that with regular occurrence, the wwwroot will contain the file 'FAILED TO INITIALIZE RUN FROM PACKAGE.txt' instead of the actual deployed code (web.config will show "503 Site Unavailable: Could not download zip").
Our retry code repeats the same push using Publish-AzWebApp, which is sync AFAIK (and regularly fails the same way as the pipeline task). We don't use the Az CLI at all. As long as Microsoft keeps updating both the CLI and PowerShell modules, we expect them both to be equally supported.
I am the PM working on Azure Functions Deployments. Thank you for sharing your experience. We are currently investing in improving our deployment flows via Azure DevOps and this is a known issue. There are two aspects here - one is to address the not so optimal user experience that sets an incorrect expectation of "success" and second to actually provide more visibility into errors that cause deployment failures. I have prioritized the former to go out as part of the next DevOps release and will be happy to help you with the latter if you can create a support ticket with more details so we can investigate further. Please feel free to email me with any additional feedback on sokulkar at microsoft.com. Again thank you for your valuable feedback and usage of Azure Functions.
Do you have any plans for “easy auth” (that’s what MS support called it despite it not being called that anywhere in the Azure UI or documentation) with azure functions or at least not allowing people to try to set up auth for azure functions when according to your internal support AD-B2C “easy auth” is not compatible?
BTW I just want to say that Functions kinda work aside from the above issue, but having to write a boot-loader that handles auth kinda defeats the purpose. Or is it just that AD/B2C is so bad even your support guys don't know how to set it up? Tried this about 12 months ago so maybe it's changed in that time, but fuck me dead, getting auth to work with Functions was a pain in the ass and made me swear I'd never touch another Azure project again. I don't think the problem is with Azure Functions per se; every interaction with AD/B2C at every org I've worked for that used Azure has been a monumental disaster of poor documentation and excessive complexity. You need a true expert in auth to work with it, and even then they will spend half their working life swearing. The auth offerings from Google and AWS just work.
AD/B2C is also the biggest problem with teams (I just can’t log in on my machine as I’ve too many accounts for it not to get in a tizzy). I can’t even play minecraft cross platform because of it.
Oof. I've come to despise Kudu for these kinds of reasons. We ended up ditching direct zip deployments and now do Docker images, which get pulled automatically using the webhook on the container registry. It is far more reliable and also far easier to integrate into a devops pipeline.
I will write up the 30 years of using MSFT stacks and OS up at some point. That’ll be 100% substance. But the salt will have to remain.
If you want to see the real MSFT check out the dotnet team handling of the customer demands to remove telemetry. This is on GitHub issues. Or was if they haven’t reorganised all the repos again thus backing up the other schizophrenic donkey comment.
I can add some salt on azure web apps. I routinely get "everything is wonderful" answers from their weird admin guis, for services that I observe cannot reach their database or even start up.
Unless I operate azure from the commandline, the ui is questionable.
>Edit: must be lunch time at MSFT. Downvote flurry.
>Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
>Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
AWS also actively works on reducing waste on behalf of their customers and passes the savings back. Their instance pricing has been reduced some 30+ times, I believe.
My firm is investing quite a lot of money / time / effort in our Azure infrastructure for relatively large and complex models (financial models with up to 10k cores; nothing very fancy, just lots of math and lots of customers). Our engineers are capable, they seem happy with the stack, and we actually get pretty solid support from Microsoft. We ran into several Azure-specific bugs in the last few years which got timely triage, quick fixes within hours and solutions within weeks. We’re probably overpaying and have been Microsoft-centric for the decade I’ve been with the firm, so the local relationship is probably excellent. All in all it’s a lot more affordable than the license fees for the niche supplier software we were using before. Point being, it’s relative cost and engineer satisfaction that matter more than the last few million in costs.
Part of our experience moving away from vendor-specific tooling was building a team that could handle the transition. It started with perhaps 2 or 3 engineers doing PoCs. Then an architect came on board with Azure experience. Things went from there. We’re now at about 40 engineers working for / with about 100 actuaries / quants. I know the lead architect has theoretical vendor neutrality in scope, but the prospect of successful teams working in the MS ‘package’ is alluring as well. We don’t need to squeeze the last drop - the projects are already successful both in financial and human terms. At the end of the day a business always has vendor risks. It’s about picking the cost-risk-reward structure that fits. All in all MS serves us well.
(We do have healthy laughs about the internal account manager for MS always chasing the latest and greatest. Virtual desktops, Azure ML and such. Luckily the lead architect mentioned above decides on the stack, not our sales reps.)
There is no "old" or "new" microsoft. Time and time again they've shown us they are still the same old thing. Company DNA is what it is. Uber also falls into this category.
Don't let yourself be fooled by the same trick repeatedly.
* Every time you open a new tab, the new tab loses auth status and needs to re-auth. When I opened tabs for a bunch of VMs, I got locked out for hitting the auth endpoints too quickly. Auth happens with a nonce, so if you open two tabs, completing auth in the first tab causes the second tab's auth to fail, so you need to refresh and do it again.
* The UI doesn't list any info on VMs; you have to click on each VM to see the details. Going back loses your list filters, etc.
* If boot logs get too large they never load in the web UI, you have to dig up the storage path, manually dig through the blob storage UI to locate the file, download it, and look at it with a local editor.
* Unlike AWS, they decided it would be better if every resource had a composite ID: subscription, resource group, name. You need to carry these around together throughout your code. In the Go library you need to pass them as separate params to separate calls; for other things you need to pass them formatted as a URL path (generating the path by hand).
* The official Azure Go library shells out to the Python CLI to do SAML auth!
* The Go library goes 100% against Go paradigms. API calls return a future, which then needs _2_ more calls to wait for and get the result from (WaitForCompletionRef, Result -- rather than just blocking the goroutine), each with errors that need to be checked. One of the two calls seems to be a leaky abstraction working around the fact that Go didn't have generics.
* WaitForCompletionRef never returns for a VM if it doesn't boot. So if you have a multi stage boot it'll just hang there.
* Their Go library is the epitome of inconsistency. There are at least a handful of different auth methods, and they have like three or four simultaneous generations of library APIs (in the same library) that are all incomplete. To this day, constant breaking changes, dep incompatibilities (with their own libraries, and with both newer and older versions of Go). Iterating collections was similarly painful, with multiple "Paginator" object styles each with unique idioms (and dozens of bugs due to not getting the idioms quite right -> skipping pages or ending iteration early, etc).
Half of it feels like someone said "We need to be different from AWS so it doesn't look like we're copying them", but since AWS was already doing things the best way, they had to do things poorly instead.
(edit: AWS has tons of issues and I don't want to pretend otherwise, but the comparison to Azure is night and day)
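For readers unfamiliar with the composite ID complained about above: every Azure resource is addressed by an ARM resource ID that embeds subscription, resource group, provider, type and name in one URL path. A sketch of assembling and splitting it by hand (the path format is standard ARM; the helper names are made up, and real SDKs expose parsers for this):

```python
def resource_id(subscription: str, group: str, provider: str,
                rtype: str, name: str) -> str:
    """Assemble the ARM resource ID path many Azure APIs expect."""
    return (f"/subscriptions/{subscription}/resourceGroups/{group}"
            f"/providers/{provider}/{rtype}/{name}")

def parse_resource_id(rid: str) -> dict:
    """Split a simple (non-nested) ARM resource ID back into its parts."""
    parts = rid.strip("/").split("/")
    return {
        "subscription": parts[1],
        "resource_group": parts[3],
        "provider": parts[5],
        "type": parts[6],
        "name": parts[7],
    }
```

Contrast with AWS, where a single ARN string travels through code as one opaque identifier instead of three parameters that must be kept in sync.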
The spamming (and outright lying) from MS reps was pretty ridiculous. At a prior employer we couldn’t tell if our rep was clueless or unethical. We raised to three iterations of bosses but his leadership changed annually so he was never held to account. Finally a user revolt caused them to lose an $8mm deal. Even then they reassigned him rather than fire him.
I’m impressed that they’ve moved on from being a Windows and Office company but their culture hasn’t caught up.
Typical Microsoft. Using their shit is like riding a schizophrenic donkey. I have been burned tens of times over the years and stuck with deprecated products and frameworks that were the official supported way of doing stuff on their platforms.
Oh so much. We hear this from our users all the time.
Full disclosure, I work for a Cognito competitor, so I'm definitely talking my book here. But our users are not. Here's a summation.
* There is only one deployment model, a SaaS offering on AWS.
* The user pool and identity pool concepts can be difficult to grasp.
* The user interface presented to your customers is inflexible and hard to customize.
* You can run Cognito only in the geographies supported by AWS.
* Cognito pools are not multi-region. If the AWS region that your pool is in is unavailable, you have few options.
* It doesn’t support localization of messages or the user interface.
* You can’t backup or export all user data, notably password hashes.
* SAML accounts are expensive after you grow beyond the free tier.
* Possibly most concerning, Amazon Cognito has been relatively static, receiving few improvements over the last few years. The console UX did get an overhaul in 2021, though.
* The customizability of the user interface and workflows are lacking.
* Since you can't export your password hashes, if you move to a different provider, you have to reset user passwords or use a drip migration.
That said, I've heard rumors they are working on multi-region Cognito (see this vague tweet from an AWS employee: https://twitter.com/sarah_cecc/status/1486346455790985228 ) which would absolutely be a game changer. And they have a nice serverless model if you can get by with their functionality and deployment model.
Hm thanks. I suppose I wasn't so clear as to say "vs Azure AD", which I was really after. But that is also a useful summary for thinking more generally about auth! I guess Azure AD would at least have the same restrictions on deployment, but restricted to Azure regions instead (IIRC Azure AD B2C only has 4 regions globally, although I may be out of date.)
Ah, did you want to compare it to Azure AD or Azure AD B2C? As you allude to, they are different solutions aimed at different spaces (IAM vs CIAM).
I'm less familiar with Azure AD B2C than with Cognito, but from my research and tinkering, Azure AD B2C has similar limits on UX customizability (basically CSS was the only way to change the look and feel when I checked it out), but more flexibility around workflows, using custom policies: https://docs.microsoft.com/en-us/azure/active-directory-b2c/...
Azure AD B2C wins on pricing (no SAML surcharge, I believe, and the per-MAU price beats Cognito's until you get to 10M users for P1), but, as you'd expect, it is less straightforward.
As far as regions/availability, I couldn't find a straightforward answer. This: https://docs.microsoft.com/en-us/azure/active-directory-b2c/... indicates you can choose one of 4 places to store user data, but it isn't clear to me if there are multiple regions/data centers that apply if you choose, say, France. Cognito's availability story is much clearer.
#1 makes no sense as a criticism of an AWS service; that's true of (and the point of) all AWS services (excluding services on Outposts, but that's not used casually).
Agreed that #1 is due to it being an AWS service. There are a number of issues with Cognito that are shared with other AWS services.
That doesn't necessarily disqualify Cognito for all situations. But certain auth providers (including my employer, but also many other solutions) can be deployed elsewhere (in other clouds, on-prem, etc).
Hard agree here. Azure has been playing hard catch-up with AWS in terms of feature parity, but scratch the surface of almost any of their services and you will discover shonky, bug-ridden implementations (and many things seem to be in perpetual preview) that are more expensive than their AWS analogues.
This has been my observation too. AWS started with the primitives needed to operate a cloud service. I remember the days when many AWS features were available via APIs only, no web console. Then it built higher-level services on top of those as customer needs became clearer.
Using Azure always felt like the internal development must be led by the marketing team: “here are all the services AWS has, go build them as cheaply as possible from all the existing garbage we have, by yesterday!”
Counterpoint: They're both huge trash. They both have advantages over the other (the API is just better for Azure IMO), but those advantages pale when viewed through the lens of how bad both platforms are. If you can use anything else you should do so.
Given that, having a discussion about which is less bad and how isn't very useful. When viewed holistically, they are both bad.
Honestly, for my personal needs I just use the smaller hosts: Scaleway, Hetzner, maybe DigitalOcean or Vultr. Actually right now I'm using Time4VPS (a regional host) because of some nice yearly discounts. I choose Debian/Ubuntu LTS and install Docker or another OCI runtime and run containers inside of Docker Swarm or Kubernetes (K3s), with something like Portainer or Rancher for graphical management.
So in a way, I'm building my own PaaS solution (well, more like launching existing turnkey solutions with minimal to no customization), since I just don't care about getting vendor locked.
For example, managed databases are cool, but a PostgreSQL/MariaDB container is pretty close (for my needs) anyway. Fancy load balancers are awesome as well, but most of the time I need little more than Caddy/Nginx/Apache to act as my ingress with Let's Encrypt (DNS-01 challenge, or a cert directory shared over NFS or something). Updates? Just bump a tag after a backup, validate that it works, restore if it doesn't. Crashes? Automatic health checks and restarts, or maybe notifications to my Mattermost instance/e-mail if I'm feeling fancy.
And if I want to migrate elsewhere, I just carry over the bind mount directory, launch the containers with same config there and update DNS records, monitoring, backup config. That's it.
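As a rough illustration of that portable setup, a minimal compose file along these lines keeps all state in bind mounts next to the file, so migrating really is "copy directory, start containers, flip DNS" (image tags, paths and service names are illustrative, not a recommendation):

```yaml
# Minimal self-hosted sketch: Caddy as ingress with automatic HTTPS,
# Postgres with a bind-mounted data directory you can carry to another host.
version: "3.8"
services:
  ingress:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data          # certs live here; back this up too
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - ./postgres/data:/var/lib/postgresql/data   # the bind mount to migrate
    secrets:
      - db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt
```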
That said, self-hosting most of your stuff is great, but lots of people rightfully don't care and don't want to bother gaining that skillset, which is perfectly fine. Use whatever works for you, even managed Kubernetes is a good step to avoid vendor lock if you use containers. Or, you know, use whatever your corporate dayjob setting mandates.
> if you get any sort of incident they will ask you to rewrite your app to accommodate them, rather than fixing their own bugs/state.
As it happens I recently ran into this exact situation with AWS. My team hit a surprisingly low undocumented scale limitation with one of their services. The error message stated that the limit could be increased on request, so I did exactly that. Rather than simply increasing it, they delayed and kept asking to setup a call to talk about how my team could rearchitect our code to work around their limitation. I told them no thanks, finding a way around the limit was not the issue, having to burn time implementing it on our end was.
Overall though I do agree with your point. AWS is generally pretty good about handling scale and not breaking older APIs, but there are exceptions.
Resource groups seem awesome, and are nice in some ways, but they are sort of a slippery slope that encourages you to put more into a single subscription than you probably should.
I mean, there is no right and wrong answer I suppose. You can have all the code for your project in one file. Forget a class per file or whatever, maybe it works for you.
What I saw is that the more you co-locate in an AWS account or Azure subscription, the higher the bar goes for ensuring your cost reporting and access control are also completely aware of resource-group boundaries.
You really want to know: is that $ for project A or B?
You really want to ensure folks on project A have access to their cloud resources, but not project B.
You can do that even in AWS, combine them and use IAM to segment things.
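To make that IAM segmentation concrete: a sketch of a tag-based policy that lets project A's group act only on resources tagged for their project (the tag key, actions and Sid are illustrative; `aws:ResourceTag` is the real condition key, but check which actions support it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProjectAOnly",
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/Project": "A" }
      }
    }
  ]
}
```

It works, but every policy, cost report and quota now has to stay aware of the tagging convention, which is the extra overhead being described here.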
I think there is just more room for error if you combine.
Also I think Azure has some per subscription quota limits and you may not want team A and B competing for them.
Again, no right answer, but I know we've done projects to split accounts and have never done one to combine them.
I’m quite sure all the scenarios you mentioned can be managed leveraging resource groups and AAD settings. OTOH I can agree that for big engineering groups managing settings can become a challenge. IMO it’s not a matter of correct config, but about a threshold over which a single sub takes more effort to maintain.
yeah that’s a good reason, though it sounded like one should never do that. I believe the majority of small and medium enterprises are ok with a single bill, so putting all services in one subscription is fine. Logical organization can be done via resource groups.
This is a general problem with Microsoft, not just Azure. You can’t rely on anything new that Microsoft has released in the last 1 to 2 years, and especially not anything they are heavily promoting. They naively seem to think that rewriting something that ALREADY WORKS with a half-baked, unproven new technology is a good way to invest developer salary $. How naive is that? Remember Silverlight? Remember Microsoft pretending for years that C++ was dead, and being forced to backflip because, strangely, companies out there did not rewrite their proven 10-million-line C++ applications in C#? Microsoft in general has zero clue about what is important to the companies using Microsoft tech. What matters to mature developers and companies is stability: being able to rely on a tech for decades without having to throw out millions of $ to rewrite everything using the latest wet-dream tech generated by Microsoft.
>Azure APIs, tools and services get deprecated often
This is one of the most shocking things about "new" Microsoft. Microsoft always had the best backwards compatibility; now, with Azure and Office 365, it seems they do not give 2 shits about backwards compatibility and breaking changes.
Every day there seems to be a new breaking change, or some API, PowerShell module, sync application, or service being deprecated, often with limited or no drop-in replacements.
Have workflows, automation or systems dependent on those things? Who cares, move to the new thing.
We are having a combined issue very similar to this, and other comments in this thread.
We have been using Azure for 5+ years, we are quite heavily invested. Our payment card expired and so we decided (as good customers) to update our card, only Microsoft's billing will not accept it because it will not go over to the embedded payment gateway page.
We provided screen-share sessions to their "manager", we provided HAR files from Edge to help support "engineering" diagnose the problem, and we provided proof that it is their own servers responding with 500 Internal Server Error messages and 403 Forbidden errors.
The Microsoft support agent admitted that there is a fault with their system. They admitted that they need to fix it.
We keep asking for the issue to be escalated - it never is. We keep asking for the issue to be treated as a complaint - it never is. The agent simply will not do the things we ask.
Communication is like getting blood out of a stone: once a week if we push out two or three reminders, and always out of hours, even though they repeatedly ask us what hours suit us. They even schedule calls and don't call.
2 months later, they tell us that 35% of Microsoft customers with our bank have this issue and they're not going to do anything about it, so, uh, I guess we're up the creek with this? Thanks Microsoft: you're letting down our 70,000-ish customers, and we're not going to take the blame for it - we'll be honest with them.
And so we became AWS customers. Zero problems with payment. Zero problems with our Linux services migrating across. The only issue for us is a technical debt issue - migrating old .NET web apps.
The whole affair has really soured my personal opinion, and our customers' opinion, of Microsoft.
Unfortunately, one common approach with clouds these days is "unless you're big enough or in good relation with us and advertise us to others at every opportunity (or otherwise align with our vision where we want to be), we won't do shit for you, because it doesn't affect our bottom line".
In other words, after the initial years where clouds were startup-friendly, it became very enterprisey very fast.
Unfortunately, this is what I've heard happened to a medium-sized org at least. It's less about the support itself, but more so on the strategic/partnership side.
But I understand, such things are expected and change over time, so this is not set in stone and depends on the current climate at any org/cloud.
I know there is a reason why I keep two separate Microsoft accounts - one for the Office stuff and one for my Xbox - because I'm absolutely sure that Microsoft will eventually mess something up one way or another.
But to be fair: Five or so years ago I couldn't change any personal data in my Playstation account for over two years! Sony wasn't able to perform a simple CRUD operation on its database - and the error message went basically nowhere.
Having used both... Azure pricing is like a used car dealership. Would you like groups with your Active Directory? That's an upcharge. Every service has a tiered pricing model which may be cheaper but requires a ton of cognitive overhead.
Now, Azure presents itself as a much more cohesive platform, designed with thought and care in how it works with all the other tools.
AWS feels like 700 two pizza teams designing a cloud all in their own black boxes.
If you are using Microsoft workloads, Azure all the way but I would be hard pressed to move otherwise.
Clearly. Things that seem super obvious and intuitive for an end customer require one to jump through multiple hoops to get done. At this point even AWS seems to have given up trying to look at the big picture and provide higher-level end-to-end services. They are happy to roll out lower-level primitives.
I guess there’s a good amount of money to be made by building end to end solutions on top of AWS services.
tl;dr: AWS teams seem to mostly build technology and building blocks, Azure seems to mostly be a continuation of business products and Microsoft-to-Business contracts that happen to be tech-related.
It is, after all, not "datacenter emulation as a service" like Microsoft Azure is. There are a handful of 'end-user' SaaS services from AWS, but most of it, as you describe, is primitives used to build other things.
This is also where AWS originated from: just an internal system that was built to supply object storage and compute to internal teams. And when you start supplying commodity systems as a service it's just a matter of adopting and adding new services as they become more generic and more commodified. (RDS, container-based compute, FaaS, Load Balancing to name a few very common things that really are the same everywhere)
A lot of people are used to specific services like GitHub where you pay for Git hosting, or Salesforce where you pay to get a CRM, but those are not building blocks, just like Azure is mostly just "things that Windows servers in datacenters used to do" in varying degrees of being managed. You can get a Windows VM and run an MSSQL instance on it, or you can get an MSSQL instance, or you can get a shared SQL partition; it's all the same service but with different contractual obligations, and that is exactly what Microsoft likes to do. That top-down difference is what feeds all the other differences between Azure and AWS (and GCP).
This is not to say that they don't have Infrastructure building blocks, or that they don't have anything non-traditional, but it's mostly just pandering to the existing customer base on a business level, and not on a technical level. Same reason some people are stuck with MS Teams, not because it's good, but it was part of the MS contract that was essentially grandfathered in from legacy on-prem Office and Exchange to hosted versions to Microsoft 365. They mostly got people that way because of their failure on the intermediate step (Exchange Online from MS sucked), by simply hosting Sharepoint and Outlook for you, integrating everything else on top of that.
A lot of hand-holdy services and frameworks might save time for the first MVP, but they then invariably force you into an insane opinionated dance to work with them any further.
> AWS feels like 700 two pizza teams designing a cloud all in their own black boxes.
Couldn't agree with this point more, and it's enough to make me prefer Azure. AWS takes open-source tooling and slaps a very thin veneer on top to make it its own. Almost nothing is cohesive or easily integrated. You can ask the same question of three AWS solution architects and get three similar-but-different stacks suggested.
Azure seems to be going the way of solid, simplified, integrations between their tooling (things like Synapse), while AWS is trying to be first to market (or fastest, or cheapest) with all of these individual components.
Yes, the number of comparable & overlapping offerings in AWS is very confusing to someone not fully immersed in it for years and years.
There aren't exactly clear comparison matrices or decision trees on why you would use one service over another. It really just feels like hundreds of different services that were built for different specific end users and then slowly grew into overlapping offerings.
It's worse in big corporate, and especially financial settings as only certain flavors of certain services will get the cyber/infosec blessing. Then you have vendor products you want to use in AWS which only support certain flavors of those same service types as well.
So we end up having to bang heads against walls to actually get internal cyber&external vendor onto same page. If the product touches several service types (containers/storage/database/etc) then you have to make sure they can all be strung together in an approved compatible fashion.
In the old days a vendor could say they support x86 Linux, and you knew you'd more likely than not be able to install their software. Now you have to go many layers deeper than "we support AWS" to understand if it's actually going to work or not, sometimes with multi-week POCs.
I have experience with Google Cloud and AWS is overwhelming for me: it is like going to a buffet with 80 varieties of crab rangoons or something - it was really tough to make a choice. I was just trying to enable alerts on a container-hosted service and it took me quite a bit to get there and even then the result is not very nice. Probably my inexperience though.
Yeah, the tiered pricing makes Azure super complex. Azure App Services is crazy. The problem there is there are technical differences too, so it's very hard to sort out what you need. Storage is more complex too. While AWS has lots of challenges from those 700 teams, each service is simpler.
Sounds like all Microsoft licensing. It's ridiculously complicated, and even the dedicated people hired to handle JUST that for you will often be confused or not up to date.
In almost 30 years of dealing professionally with MS I have never once been able to get an answer to any licensing question without the MS rep needing to consult with others and get back to me.
I’ve seen a few orgs who elected to use Azure because they don’t want to give revenue to a competitor, leaving technologists to “make it work.” Who makes technology choices in an org is an important question to ask when interviewing (which C-level role: CTO vs CIO vs CFO, etc.).
I have no preference either way, everyone’s money is green.
At the level of contacts that these orgs' sellers have with enterprises (CxO), I can see why that would be a factor. Microsoft is not in the habit of buying pharmacies, airlines and grocery chains. Amazon would make many execs nervous on that ground alone.
Yes. And they turn off existing customers after they buy tech building blocks. But Azure will struggle if they can’t get a stronger marketing message than “We aren’t Amazon”
Our account rep recommended that we change regions from westus1 to westus2 because westus1 was legacy and that's why we were experiencing so many edge cases and bugs (lots of them). That was a year ago.
Today we can't deploy any new Postgres servers in the westus2 region because they claim capacity issues, but they will let you deploy VMs that use the same instance type. If we had to recreate a server we couldn't even do that! (Azure support will have us recreate instances if there's a problem.)
They tell us to use a new region and have no ETA on when they will have more "capacity". Their answer is to force us into a new region and make us pay out of pocket for inter-region bandwidth charges between them.
The only reason we are even trying to create another postgres server is because they are deprecating their original postgres service for a new one (with no clear migration path that actually works).
This is just my rant for the week about Azure as a platform, I have many gripes about how they handle things as a whole, not to mention the circular support that ends in us dealing around problems.
> Azure support will have us recreate instances if there's a problem
Azure's network keeps randomly partitioning itself every few months, a bunch of VMs will be unable to talk in the same AZ with the same security policies etc, no changes at all for months. And yeah, they just tell us to recreate the instances which is a huge pain with Nomad. No one has ever shown any interest in figuring out what causes this, despite the fact that it's been happening for years.
Interesting you brought up the postgres issue. We are facing the same right now in Germany West Central region and the support's response is "consider deployment in other region".
I have used both AWS and Azure, and ended up settling with Azure. The services all seem to hang together much more cogently. The service ecosystem also seems to move slower and feels less experimental, which is a positive when I am faced with supporting enterprise solutions.
Regarding support, I have found Azure support to be very good, albeit I subscribe to paid support. For those who claim AWS support is better than Azure support, are you paying for a service plan or are you referring to free tier support?
I agree. I used Azure at my last 2 companies and joined the company I currently work for right before it was decided to do a total rewrite and use AWS instead of Azure. I'll be looking for another Azure gig if/when I leave my current position.
AWS...works. But everything feels very primitive and few things work together coherently. They expect you to tie just about everything together with Lambdas instead of making various services work together well.
And some of their services are just a complete joke compared to Azure. Azure Data Factory is probably the greatest ETL tool I've ever had the pleasure of working with. Glue...does do ETL, I suppose. Event Hubs and Stream Analytics feels much nicer and better integrated than the equivalent Kinesis offerings. AWS has even deprecated their "legacy" SQL Kinesis analyzers. Now you have to write lambdas to do it.
In general I just think there's better overarching technical vision at Azure, whereas at AWS it's hundreds of independent teams that aren't coordinated well at all.
Pretty much my same experience with AWS. We had a customer with some infrastructure on Azure that we had to integrate with. The hoops we've gone through are:
1) Wade through poor documentation to find how to do it, follow the docs to a letter. Even find a perfect example of what we are trying to do that another company did.
2) Get vague error messages that basically only say, "You can't do this." What? Why? Alternatives? None of that.
3) Contact support. Tell them we are trying to do X and give them the other example. Get kicked to a team in the Philippines. After a day or two of conversation, don't hear from them for a week.
4) Get told we were placed in the wrong support team. Get kicked to another team in another country.
5) Repeat 3 and 4 but then this time told we are being kicked to the real Azure team in Redmond.
6) Finally make progress. But very little communication, and certainly nothing like what went wrong, what we should have done, etc.
We are waiting for them to take a final step before we can even test our integration. It's been three months since this started.
Had a similar experience when integrating with Microsoft Teams. The integration worked OK for 99% users, but for 1% users we were getting some vague "not supported" errors without explaining anything (what's wrong or how to fix it), sometimes just 500's. Nothing in the documentation about the errors. Filed a ticket, the support guy gave a few suggestions, but it was clear he didn't really understand the documentation himself, I explained why his reading of the documentation was demonstrably wrong and had nothing to do with my problem, he agreed and promised to escalate to the actual dev team. Haven't heard from them since then (it was several months ago). I've had several such interactions with MS which led to nowhere. Meanwhile a competitor (we integrated with, too) used to solve all our problems in 1-3 work days.
Azure has a pretty terrible product and user experience, but what may be even worse is the support. I pretty often try to report bugs that I encounter, but they ignore those and point to unhelpful documents that I've already seen. Usually, if I have a real problem, I will have to find the solution myself. One of the last support requests I raised was assigned to someone who went on a week long vacation the next day. This is what happens when the purchasing decisions are divorced from those who actually use the product - no incentive to improve on the Microsoft side (for basically all their products).
> This is what happens when the purchasing decisions are divorced from those who actually use the product - no incentive to improve on the Microsoft side (for basically all their products).
If there was a way to sum up Microsoft's entire business model, this is it. Azure is a major player because they know how to sell to the C-suite.
> This is what happens when the purchasing decisions are divorced from those who actually use the product - no incentive to improve on the Microsoft side (for basically all their products).
Microsoft is one of the largest consumers of Azure services. I don't think it's fair to say they are divorced from the users of their services.
I think it's fair. Talking with people who actually use Azure from inside Microsoft, they claim that outside clients might even get a better experience than they do, bad as it is.
After using both Azure and AWS extensively for the last 10 years or so, I'm honestly confused at how some people fall so heavily on the AWS side. Azure UX is clearly superior and far more user friendly as the AWS console is a travesty. The services are much better integrated in Azure and Azure has substantially better PaaS services. It seems people prefer the cloud provider they cut their teeth on and anything else feels awkward and unwieldy. Throw the anti-MS bias prevalent in tech on top, and I guess here we are.
I've only ever really used Azure, so I would say I'm more on the human being side. Cloud platforms are large, complex things but that doesn't mean they have to be terrible products that people dread using.
It needs it. "Here are 10 ways to think about reservations in Azure" vs "GCP auto-applies a discount if you use a resource for more than 80% of a month."
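For illustration only, here's a sketch of the "auto-applied discount" idea from the quote. The real GCP sustained-use discount is tiered and incremental, so the flat 30%-over-80% rule and the hourly rate below are deliberate simplifications, not actual pricing:

```python
def sustained_use_cost(hours_used, hourly_rate, hours_in_month=730):
    """Toy model of an auto-applied sustained-use discount: a flat 30%
    off once usage exceeds 80% of the month. No reservation, no upfront
    commitment — the appeal is that the customer does nothing."""
    cost = hours_used * hourly_rate
    if hours_used / hours_in_month > 0.8:
        cost *= 0.7  # discount kicks in automatically
    return round(cost, 2)

print(sustained_use_cost(200, 0.05))  # light usage: no discount -> 10.0
print(sustained_use_cost(730, 0.05))  # full-month usage: discounted
```

Contrast that with "10 ways to think about reservations": the entire decision tree collapses into one `if` the customer never sees.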
The thing is after spending a decade on AWS I'm hesitant to move to a different cloud. Like you are going to have to pay me a lot of money or give extremely generous credit and even then its a maybe.
The greatest trick AWS pulled was convincing the world cloud needs to be complex, and they succeeded. We have spent so many resources and so much time on it that it's too painful to leave.
The only exception is some Google products like Firebase which clearly shine above the pile of burning trash that is Cognito. Perhaps some other products AI related I don't know about.
Azure? Unless you are running .NET from Visual Studio I don't see why you need to be on it. Many generations of developers grew up not knowing what .NET or Java is.
I was tempted into running on GCP for some new projects and my experience completely turned me off to the platform. I ended up with a bug where the Google Webmaster Tools Domain Verification wasn't being picked up by GCP, but the only way for the service to issue an SSL certificate was via that domain verification. The domain verification itself worked fine, it was just that GCP couldn't read it.
I mentioned this on twitter and a PM for the project lectured me about how I shouldn't use preview services and expect them to work and suggested I use a different service- one that was also in preview. That was my last attempt at using GCP.
> The greatest trick AWS pulled was convincing the world cloud needs to be complex and they succeeded. We have spent so much resources and time on it, its too painful to leave it.
This bit made me wince, it's so true. "Concorde syndrome"[0], anyone?
Same thing with Kubernetes and every other bit of enterprise software ever written. It's all rotten to the core with the same principle, even when the initial motives might've been pure.
Firebase just feels like the ultimate vendor lock-in though. I've tried really hard to minimize lock-in... mostly by sticking to things that are easy enough to change/migrate... for RDBMS that's always PostgreSQL (pretty much all providers support it, you can self-host, and CockroachDB and others are potential scaling options depending on how you use it).
I really like the Azure Storage (Tables, Blobs, Queues) myself.. for small-mid usage, super cheap, simple, easy to completely replace if needed. Some of the more advanced stuff, seems like stuff to keep C-suite or SecOps people impressed.
If you could be construed in any way to be a competitor of Amazon (retail namely), you'd be crazy to pick Amazon, who is in the business of stealing their customer's lunch money at every level.
You'll be surprised to learn that quite a lot of major companies running on Azure aren't Microsoft/.NET shops. Linux/Java workloads are very common. The tooling is pretty good for all that stuff too (nearing "First Class Citizen", but maybe not quite).
I use it, am currently going through my certifications and also make a living doing it so I feel qualified to comment...
Those people moaning about price... If you have to ask the price you can't afford it. My F500 employer spends upwards of $30,000,000 a year with MS and somehow (Don't ask me, I just work here) everything goes cloud first now.
To state the obvious, the big boys pay nowhere near calculator prices. The real pricing is a galaxy away. Living in a world where one small tweak can save $1,000 a day, it's all about efficient planning of what you're doing, i.e. design it right — but Azure designers who are good don't come cheap.
What wasn't made obvious (and this is where a lightbulb may go on) is that MS is engaged in a hearts and minds war for Sysadmins like me. So much so that MS have a special program available to big spenders where:
1) Pretty much the entire Azure training catalog is available for free, ie AZ-401, AZ-5XX, AZ-2XX? When I say free I mean real attend in person courses with the full course content as you would normally pay $3,000 for. You can do it as many times as you like, as often as you like.
2) All personal labs are paid for with free credits (it's a bit of a grey area, this one)
3) All the exams are 100% free. All those Pearson Vue exams? 100% discounted. Did I also mention unlimited retakes?
As for the complaining about APIs and products being retired, well, the trick is to stay with the mainstream items offered.
In short I have had about $20,000 of training from MS this year alone and it hasn't cost me one single cent. I really dislike MS, but if they want to make me a very rich and in-demand person with companies who think cloud will save them from being dinosaurs, I won't bitch too loudly.
If the system is so complex it requires multiple in person training courses and exams, that honestly sounds like more of a disadvantage than an advantage. I guess from a selfish individual perspective maybe the system that lets you build a stack of certificates is better than the one that anyone can use, but that's not a recipe for long term success.
In the post he mentions that he could use a simple $3/mo Lite VPS (or vms is what he says) for what he needs.... so why go all the way to AWS if you can use something lite? Why are things like DigitalOcean / Linode / Hetzner / OVS etc so discounted? (edit: discounted in terms of opinion, dismissed so easily, not discounted in price. Obviously they are cheaper)
AWS/Azure/etc. are just not the place to get up a low cost personal server. I think the free tier confuses people into trying it, but ultimately what you get is a tiny VM with limited services that is designed for you to rapidly outgrow.
That's okay though, because that's not why the big clouds are valuable. I'm going to refer specifically to AWS, but I also include Azure, GCP, etc. AWS is not cheap. If you only need a single host, or a small fixed group of hosts, AWS is an expensive option with little to justify the added cost. What you're paying for with AWS isn't just the resources you use, but the resources AWS keeps available for use at a moment's notice. If you don't have that need to scale your resources up and down, the costs of using AWS become harder to justify.
If the goal is to do it on the cheap, a $5/mo vps is going to do better than the AWS free tier. If you need more power, the price gap between a bare metal private server and an equivalent AWS instance just gets bigger as the server gets more powerful.
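To make the gap concrete, here's a back-of-the-envelope sketch. The hourly figures are ballpark on-demand numbers for illustration, not a current price sheet:

```python
# Flat-rate VPS vs per-hour cloud pricing, using illustrative rates.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate):
    """Monthly cost of an always-on instance at a given hourly rate."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

vps = 5.00                         # flat $5/mo VPS
small_cloud = monthly_cost(0.0104) # micro-class on-demand (ballpark)
big_cloud = monthly_cost(0.3840)   # larger general-purpose (ballpark)

print(vps, small_cloud, big_cloud)
```

Even at the low end the always-on cloud instance costs more than the VPS, and as the sketch shows, the absolute gap scales linearly with the hourly rate — which is the "gets bigger as the server gets more powerful" point above.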
Not to speak of network/power redundancy, managed hardware (like not having to prove to your ISP that their memory or disk is broken!), lots of services besides just metal...
HN is actually really in love with putting things on Hetzner, OVH and co. It works and it's cheap. But it wouldn't be a good experience when things break, and you better have really good backups and plans to fail over to another box when something bad[1] happens.
That Google DC video doesn't actually show how the data center is setup or operates. It shows all the security layers to make people feel like their data is safe (from physical attacks, not cyberattacks of course). You see more about Google Data center hard disk shredders than you do how they manage and cool their servers. The differences are more about preventing humans from getting in and access to the machines. The actual racks, cooling, etc are pretty similar in many respects. There's more redundancy in Google data centers, theoretically. But the reality is not what it might seem in terms of actual data center resilience.
> Why are things like DigitalOcean / Linode / Hetzner / OVS etc so discounted? (edit: discounted in terms of opinion, dismissed so easily, not discounted in price. Obviously they are cheaper)
Because the value-add of SaaS is often worthwhile compared to roll-your-own (or some open source equivalent) on top of a self-administered server.
A single data center is not reliable. We all know in 2022 that having a backup offsite is important. No data center can offer 100% uptime in perpetuity.
Sure but the less reliable the more those data centers will be discounted in public opinion. Which is what OP asked about. Same reason AWS us-east-1 has a horrible reputation.
Theres just no way Google/Microsoft can catch up to the AWS product, without some significant improvement in distributed systems theory. AWS has had too many years of grinding tens of thousands of engineers to get things perfect. The customer obsession, rigorous on-call, and perfectionist work culture are what make AWS. The corporate culture of MSFT/Google would never allow for the working conditions at AWS and it shows in their respective inferior products.
Yeah... Google has no experience with global distributed operations, data centers and applications at all. Neither does Microsoft; they probably don't even know what a network is, am I right?
/sarc
Seriously though. AWS isn't the end-all, be-all here, and there are incredibly smart and talented network and systems engineers at many companies. The UX for AWS is often pretty bad, and it's sometimes difficult to know HOW to configure something, even if you know WHAT to configure. Like allowing a higher usage threshold for Redis caching instead of trying to keep 50% of memory open by default: you can find the actual Redis configuration option easily enough, but how to set it in AWS? Who knows (at least when I was using it). At this point DynamoDB's autoscaling is probably a much better experience, but getting it to grow/shrink appropriately was very painful at one point.
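The Redis complaint above maps to a concrete knob: if memory serves, ElastiCache exposes the reserved slice via a `reserved-memory-percent` parameter (treat the exact name and defaults here as assumptions from memory). A toy calculation of what the setting trades off:

```python
def usable_cache_gb(node_ram_gb, reserved_percent):
    """RAM actually available to the cache after the provider's
    reserved slice (kept free for things like BGSAVE forks)."""
    return node_ram_gb * (100 - reserved_percent) / 100

print(usable_cache_gb(16, 50))  # half the node held back
print(usable_cache_gb(16, 25))  # a smaller reserve frees 4 GB more
```

The hard part, per the comment above, was never this arithmetic — it was discovering which AWS parameter group setting corresponds to it.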
On Azure, some of the simpler services are some of the easiest to use and get started with... Data* (Tables, Queues, Blobs) in particular.
Of the three, I found Google's Cloud the most interesting to deal with.
All of that said, I don't think any of them are incapable of correcting course and making things better... but it's easy enough to let things get worse. I think Amazon's biggest problem is they now have so many competing and overlapping services, it's become harder to even know what's right. Same for Azure to an extent...
I am finding DigitalOcean's offerings to be compelling and may find myself trying that path with a project, or at least part of a project in the future.
Because learning terraform or cloud formation takes time. I am an average developer and I understand foundational infra theory. I know I can click around AWS without help in order to deploy a CRUD app. Why bother with scripted infra if I am going to revisit it once a year to fix an issue?
If you're using AWS you probably should script it for nothing else than to help other employees / new hires understand what's going on. Something like Heroku is usually a better proposition if you don't want to script anything.
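A minimal sketch of what "scripting it" buys you, even for a once-a-year CRUD app: the stack definition lives in version control as data that a new hire can read and diff, instead of reverse-engineering console clicks. The bucket name and deploy command below are hypothetical:

```python
import json

# The whole stack is a plain data structure checked into the repo.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-crud-app-assets"},
        }
    },
}

# Render it to a CloudFormation template file.
print(json.dumps(template, indent=2))
# Then deploy with something like:
#   aws cloudformation deploy --template-file stack.json --stack-name crud-app
```

Next year, the answer to "what's deployed and why?" is `git log` rather than archaeology in the console.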
People writing/operating serious software never make config changes through the UI. AWS console then is mostly just to view/investigate what is already configured
Because you have to start somewhere... if the UI doesn't work, why should I trust their console app.... I have similar views about phone apps... If your website doesn't work, WTH should I trust your app?
Also, in my original post s/DigitalOcean/Cloudflare Workers/
Can you elaborate on what you mean by "without some significant improvement in distributed systems theory"? If it's as you suggest, that it's the work culture which hold back Google and Microsoft, I don't see how improvements to distributed systems theory would result in ever passing AWS or its market share.
Probably that if there's enough of a paradigm shift in best practices for managing software infra, it would deprecate a significant portion of existing AWS and give Google or someone else a huge head start in implementing the new paradigm. A smaller-scale example of this is Kubernetes, where GCP is leading adoption.
edit: AWS somewhat famously burns through engineers and ties personal compensation and career development to making things happen even if it requires insane workloads.
I tried Azure a year ago and I was really shocked to see failures to deploy basic things like VMs or AKS — failures that required contacting support to get fixed. And support took several days to answer... WOW
I joined an organization a couple of years ago who were already deeply in the Azure ecosystem. At some point we needed an instance with high disk performance and Azure basically told us to piss off. Spent almost two months trying to convince them that we were worthy of paying for this instance type, and nope. Piss off.
We ended up taking our entire dev, staging, and production infrastructure to AWS and GCP.
Azure is really big outside the US and the EU. In LATAM they practically have a monopoly, so much so that governments use them exclusively. They have a tight grip on that front, and I'm willing to bet that some shady things are happening there as well.
Heck, country managers from a big European company were conducting all sort of dealings with local companies under the table. They were discovered and of course they were cancelled but by that time they were rich and didn't care.
As someone who helps deliver a free tier to customers at my work, I can sympathize with Microsoft’s pains here. It sucks the author had such a bad experience, but fraudsters are scammy bastards who are ruining “free compute” for everyone.
> fraudsters are scammy bastards who are ruining “free compute” for everyone
Does that mean there are people out there who'd sign up for a free trial but who have no intention of paying for any product, ever? Wow.
(This is a genuine question, in the spirit of scientific enquiry.) Wouldn't that be far more of a problem with your marketing department and their assumptions about converting trials to paid, rather than with the people who have no intention of paying for your product?
The fraudsters are really like 50x more of a problem than what you're talking about in practice. Even if you _think_ you're the sort who will never go beyond free tier, circle back in 3 years and you've probably forced your employer or someone you consult for to start paying for stuff even if you still use free tier for your personal stuff. I call it the "Adobe effect" -- a reference to how in the early 2010s some Adobe insiders famously released some of their own pirated photoshop torrents, with, rumor has it, company approval, because they realized the kids pirating photoshop would convert to sales ~5 years later when they work for whatever company. Free converts very well even if it is in surprising / indirect ways.
The fraudsters and miners, however, are a whole other thing. They will suck up the maximum amount of resources they can and extract a tiny profit from it. It will inevitably cost you, the cloud provider, money, because mining typically isn't profitable without GPU compute anyway, so the entire profitability of what they're doing is based on the fact that you are offering free electricity. It's a net loss, as these users will also probably never convert proportionally to the usage they incur. Most users of free tier services use a tiny fraction of the available free resources within the tier. These guys will hover at 99.99% always, ruining the whole profitability model and dramatically increasing the cost of acquiring a customer via free tier.
A lot of cloud services (not just Azure, but certainly in Azure) are assigned a globally unique resource name, which also gets assigned a DNS name.
Domain squatters can walk the domain to find these services and keep an eye for if they lapse. (Sometimes this happens if a service needs to be recreated due to upgrade)
What a malicious user can do during that maintenance is trigger a deploy of the same service name now that it's available and snap it up. They can either shut down the VM or scale down the service to be sitting there and not costing them anything, and see if they can extract payment from their victim to release the domain (which might be hard coded somewhere). Worse yet, they could leave it as is and try and see what interesting traffic starts coming in and tinker with it.
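The race being described can be sketched with a purely local simulation — no real DNS or cloud APIs, and the name below is made up:

```python
# Toy model of the squatting window: a cloud resource name maps to a
# globally unique DNS entry; if the owner deletes and recreates the
# service, anyone watching can claim the name in between.
registry = {"victim-app.example-cloud.net": "victim"}

def delete_service(name):
    registry.pop(name, None)      # the DNS name becomes claimable

def create_service(name, owner):
    if name in registry:
        return False              # name already taken
    registry[name] = owner
    return True

delete_service("victim-app.example-cloud.net")              # maintenance window opens
create_service("victim-app.example-cloud.net", "squatter")  # squatter wins the race
print(registry["victim-app.example-cloud.net"])             # prints "squatter"
```

The victim's recreate attempt now fails, and any hard-coded references to the old name quietly send traffic to the squatter — which is the whole attack.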
No. We love customers who don’t need to grow beyond the free tier. What I mean is people exploiting free compute for crypto mining, phishing, botnets, cp, hacking, etc. The efforts/systems necessary, and business decisions around the free-tiers businesses have are directly impacted by the time-space spent on keeping things safe from fraudsters. It’s wild.
Same as Microsoft's tradition of tolerating complete piracy in the 90's education segment: hook them up young, then when they get to the corporate world they'll pay up naturally !
Last I checked AWS has poor support for Docker and Kubernetes, they charge a bit more for those than their own Fargate. Docker & Kubernetes are a threat to AWS because it makes applications cloud-independent, and AWS would prefer for you to weld your applications to AWS.
The issues mentioned in this story are minor irritants that exist in Azure, AWS and Google cloud.
AWS support for Docker and Kubernetes is great, I'm curious why you say that.
Managed Kubernetes (EKS) is really good at this point, and coupled with the ALB Ingress Controller, it beats anything else I've tried in terms of monitoring, ingress routing, etc. One little bit that is missing from EKS is selecting the size of your control plane, because the default is too small for dynamically scaling clusters running thousands of pods - you need to go through support to increase it.
Docker is more a function of the OS you choose on your instances, and it works as well on Amazon Linux as on any other. In k8s, Docker is no longer assumed to be the only container runtime, so lots of changes are in flight to be able to swap it out.
FWIW, my team is spending $400k/month in AWS right now. Suffice it to say, we use it a lot. It's been generally good, though it's hard to figure out new things due to terrible docs. Docs are plentiful, but not well integrated with each other and not kept up to date.
+1 on EKS. It's not perfect but we've had an excellent experience building a database-as-a-service on top of it. We run well over 100 clusters and are growing rapidly. Best thing is that we didn't have to get enmeshed with K8s operations and instead focused on the applications on top.
I don't want to assume things, but in many orgs this just means "we waste 10x more than we could have without it, and equally more on multi-team developer/management workforce needed to support it, because of course everything needs scale".
k8s doesn't get you scaling, in fact, it can get in the way, but if you play your cards right, it gets you high utilization of the CPU and RAM for which you pay, more than doing something simpler via autoscaling or whatever, because it's a solution to dynamic workload distribution, which is a difficult thing to build yourself.
Oh, after trying my hand at a custom scheduler on bare-bones VMs, I dreamed of using K8s for bin-packing too, but in reality they still end up just running a bunch of Java services that are extremely poor at RAM reporting, and so really are not compatible with the sexy VPA and other such cool stuff.
So the end result is still just a bunch of nodes running with way too much "free" RAM (and hence CPU) to accommodate for poorly predictable JRE RAM consumption patterns. I once observed that what could run on a single node with 16 cores (according to GKE's own cost reporting metrics) ends up running on a cluster of 512 (!) vCPUs. So it's more than a 10x waste, just purely to avoid running OOM etc.
No, the k8s "kubelet", which runs on each worker node, doesn't use much RAM. Java services have notoriously spiky memory usage, so you end up provisioning much more RAM than you need in the average case to be able to support the spikes, so you underutilize RAM in the average case. The previous poster is describing an issue where they overprovisioned RAM heavily and so ended up using a lot of nodes, due to the way memory requests and limits are managed in k8s.
You fix this by enabling swap and allocating pods to nodes based on their common memory usage, and by accepting that your worker node will slow down when some Java process wants all the RAM.
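A toy model of the overprovisioning being described, with made-up pod sizes: the k8s scheduler bins pods by their memory *request*, not their observed usage, so requests sized for JVM spikes drive the node count.

```python
import math

NODE_RAM_GB = 16
pods = 40
typical_usage_gb = 0.5   # steady-state RAM the JVM actually touches
spike_request_gb = 4.0   # request sized to survive worst-case spikes

# Scheduler packs by request; usage is what you'd need in a perfect world.
nodes_by_request = math.ceil(pods * spike_request_gb / NODE_RAM_GB)
nodes_by_usage = math.ceil(pods * typical_usage_gb / NODE_RAM_GB)

print(nodes_by_request, nodes_by_usage)  # 10 vs 2 nodes: ~5x overprovisioning
```

Real clusters are messier (limits vs requests, system overhead, CPU pressure), but this is the mechanism behind the "512 vCPUs for a 16-core workload" anecdote above.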
Tbh, k8s "system" namespaces also consume quite a bit (particulary if you wanna run a minimal system) - at least 0.5 vCPU on each node and something like 0.5-1GB of RAM. This is only important for the smallest systems, but still is a hindrance to K8s adoption for such projects.
This just isn’t true. EKS is well supported now and works quite well. Amazon originally bet on ECS years and years ago before it was clear k8s was going to win, but they’ve committed to EKS now.
Re Docker: ECS, and Fargate which is built on top of it (Fargate supports EKS/k8s now too), have always been Docker from day 1. So I don't understand what you mean there.
Care to... add more to the conversation? I don't even know which part you are saying isn't true. Or why it isn't true. Or what the truth is, since you seem to know?
Edit: A reply of just "yes"... Is this some weird form of trolling?
In general I'm seeing folks move from managed services to Kubernetes.
The common sentiment is that AWS managed services are complex anyway (except for very simple use cases), so you might as well go to K8s and get more flexibility and potentially reduced costs.
Thanks to OP for kicking off the discussion, and to all for the passion on this topic. The feedback has been clear that we need to do a better job of identifying the regions available for our trial offerings. To that end, we have made a change to the Azure portal for our trial users which will now take you directly to the regions that have availability for free tier images.
For those that wish to upgrade to our paid offerings, we have also made improvements to how we handle the legally-required validation steps.
We appreciate the interest in Azure, and hope that folks will continue to give our offerings a trial. And if anyone experiences issues at any stage of their journey with Azure, we would welcome the opportunity to connect and learn how we can help.
I recently read a tweet along the lines of "Azure's interfaces are designed for an accountant, not an engineer", and that sums it up.
Azure sucks. The terrible UX is just the tip of the iceberg. Things often just… don’t work. And good luck getting someone to talk to about it. The security at Azure is a total shitshow too, don’t forget.
I also have had extremely poor experiences with Azure, and recommend anyone who needs reliable infrastructure to avoid this POS.
I was called back to work (online in my hotel room) from a holiday on the beach in Thailand, by an apparently intractable Azure problem.
Expecting some technical subtlety, it turned out to be Azure cancelling my employer's account and refusing to explain why, with no appeal or recourse. The "support staff" issuing this decree then cheerfully signed off with Have-A-Nice-Day style boilerplate, after destroying my organisation's Power BI project.
Screw you Microsoft, avoid this garbage like the plague.
I bet the stress and time lost from crap like this is never accounted for in Microsoft's idiotic TCO propaganda.
I am convinced that every decision behind Azure goes like this:
"Did AWS do it like that? Then no."
"But it makes everything we do worse if we do it differently."
"That's fine."
I can create a fully functional website in 1 minute using Azure Web Apps, but it will take me 2 days to configure millions of options for AWS Beanstalk... So definitely Azure is better, because AWS is extremely overengineered and needs a lot of administration overhead.
Kind of agree on that, but I feel almost the opposite about AWS Lambda vs Azure Functions... The advantage of Functions is, of course, the direct HTTP access options.
It is amusing that the replies/suggestions to your post recommend multiple different AWS solutions instead of Beanstalk. Definitely confirms the problem of AWS having a confusing amount of overlapping products.
Microsoft's entire subscription process is a mess, so it's no surprise that Azure follows suit. Just try to start a Visual Studio Pro subscription: it will take you well over 30 minutes if you are lucky. By contrast, open an AWS account and you can spin up an instance and create some WorkMail accounts in minutes.
Judging by the last screengrab [0], it may be that the payment system is loaded in an iframe inside that white overlay and your ad/script blocker stopped it. But overall, that's one terrible experience, and just false advertising if you ask me.
As to the ad-blocker on the payment verification page, it probably just wants to run some third-party JS for fingerprinting, to prevent the free tier being abused. If you can figure out which domains want to set cookies, you can whitelist them in your ad-blocker.
A lot of sites are broken with ad blockers. It's usually even not because of ads. It's because of tracking JS, which gets called from everywhere (because we need metrics to track user behavior!) but the code around it assumes metrics never fail, so when metrics script gets cut by adblocker, everything fails. It could be pretty easy to avoid this, but nobody approves a budget to refactor the site to work with adblockers, and I am pretty sure nobody tests with adblockers. Also, as a bonus, if you break adblockers, people would disable them and give us our metrics we need to run A/B tests and generate nice reports, so it's a win-win, isn't it?
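The defensive pattern is trivial, which makes its absence all the more frustrating. A minimal sketch (pure illustration; `send_to_analytics` here is a hypothetical stand-in for whatever third-party tracking call an adblocker might cut):

```python
# Hypothetical stand-in for a third-party tracking call; raises when
# an adblocker has stripped the analytics endpoint.
def send_to_analytics(event):
    raise ConnectionError("analytics endpoint blocked")

def track(event):
    # Metrics are best-effort: never let them take the main flow down.
    try:
        send_to_analytics(event)
    except Exception:
        pass

def checkout(cart):
    track({"event": "checkout", "items": len(cart)})
    return "order placed"  # core flow proceeds even when tracking fails

print(checkout(["book"]))  # -> order placed
```

One try/except around the telemetry call and the site works with adblockers; skip it and every blocked tracking script becomes a broken checkout.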
Yeah... most large companies definitely need more testers running common adblockers like uBlock Origin or Pi-hole with the default lists.
Of course, worse still are the clear misses on WCAG for the visually impaired... or people like me who max out text and content size on their phones... It really pisses me off to no end when a modal and its buttons are off screen with no ability to zoom or scroll the dialog. The first two things I check when trying a new UI framework are how that modal interaction works and what the calendar/time interface looks like.
Their support is easily the best among all SaaS and FaaS vendors I've used. Can't stress this enough, it's the main reason I would never go to Azure or GCP. I've had many maddening experiences with Google's and Microsoft's support.
I use AWS professionally and sometimes can't believe how complex it is. To me it seems to be made for complex architectures in very large enterprises; it maps the organizational complexity. For an individual trying things and building applications, that seems way over the top. So for my personal stuff I use DigitalOcean. I think their App Platform is awesome: just connect it to your repo, and every time you merge a feature branch into main it gets automatically built and deployed.
In my experience, Azure is a must for .NET applications. Everything is just so well integrated. You can set up a web app/API that's deployed via GitHub Actions with just a couple clicks. Same goes for web apps built with React/Angular/etc.
You can include a managed SQL server instance; or go for the unmatched CosmosDB. The Storage is also pretty good and almost free.
I just don't see this level of integration in other cloud providers.
Even with .NET... at small to even medium-ish scale, you can stand up three Dokku nodes, deploy to all three, put them behind a load balancer and use a hosted PostgreSQL + replica. This can handle a lot of load with minimal startup cost and scale relatively easily on almost any mid-tier provider at far less cost.
You can still deploy with GitHub Actions, still have redundancy on the front end, and pay a couple hundred a month vs. many hundreds or thousands.
To be fair, all of the clouds are poor for what they consider "unimportant" users, such as free-tier, low-use, non-prod, etc... They all target big enterprise, because that's where the profit is.
However, unlike AWS, Azure is poor at "big enterprise" as well, which is rather shocking for Microsoft. Their strength has always been that they "know" enterprise and tick all the right checkboxes to make their big fleets of systems work.
Not in Azure!
Several people have pointed out that they suggest rewriting applications to be compatible with their bugs instead of simply fixing the bugs.
I call it the "you" problem. The marketing, documentation, and support staff will all use that word like it means something.
I'm an external consultant for an enterprise org with over 15,000 staff, 300 of which are in IT, not counting several hundred external IT contractors, vendors, service providers, etc...
If I call Azure support to report a bug, they will cheerfully say:
"Please use X instead of Y in your app, then it will work."
To which I want to find a polite way to say: "What the fuck are you talking about? What do you mean 'your' app!? I got this thing dumped in my lap! It's a lift and shift of 10-year-old abandonware! Your marketing assured some other manager here that it would 'just' work! I'm a subcontractor to a contractor with a contract to move this thing to Azure that took 6 months to draft and 3 months to sign, with Microsoft reps involved at every step! There's no going back now. Make. Your. Shit. Work."
For people that downvote random salty anger with no substance, here's one random example:
Azure SQL has literally zero support for database-level locale or time zone options.
None.
Zero. Zip. Nada.
It's permanently set to use US English and UTC time zone. There is no recourse. There's no setting. There is no way to fix this.
The Azure SQL Database Migration compatibility verification tool will not warn you if you use 'legacy' functions like GETDATE(), which assume that the server time zone is set correctly.
The documentation mentions this in passing once, on a page you will only ever see if googling this in a panic at 4:55pm on a Friday after a week of live usage in production... which is when users will finally notice that "sometimes" dates appear to be wrong.
So now your data has had local and UTC datetimes blended together, essentially shredding your data. Too late for rollback. No global setting to fix, no vendor support to update the code, and nobody picked this up in UAT because a few hours of offset is just small enough not to be detected in most cases unless the user happens to enter a record at just the right time (before 10am in our case).
I called Azure Support and they told me that yes, it's broken by design. Azure SQL is designed for use in our future space colonies, where the time zone is UTC, not for the legacy planet-dwellers that failed to "move on".
This was an unmitigated disaster for us, and a lot of sleepless nights for me.
What does AWS do with their RDS for SQL offering? Do they support time zones?
That's because their programmers are not... well... err... there's just no polite way to say what I think of Azure SQL's development team right now.
Let me put it this way: Microsoft's sales reps have convinced my customers to lift and shift hundreds of individual legacy databases to Azure SQL, and of those I'm betting 90% are quietly shredding their date-related data, causing random glitchy problems.
This is just one example of many like this. It's not that every cloud does it worse. Without exception, what Azure does is always clown shoes compared to what everyone else does, and it's not even possible to convince them to fix it, because they always turn around and say: "No sir, you are wrong, it is designed to be broken!"
Azure SQL has literally zero support for database-level locale or time zone options.
This isn't true: you can set the database collation when creating a database. Or are you referring specifically to the language of SQL informational responses (such as error messages)?
There is no recourse. There's no setting. There is no way to fix this.
> Azure SQL Database (with the exception of Azure SQL Managed Instance) and Azure Synapse Analytics follow UTC. Use AT TIME ZONE in Azure SQL Database or Azure Synapse Analytics if you need to interpret date and time information in a non-UTC time zone.
Note the addition with the exception of Azure SQL Managed Instance: if you need server-level control over the time zone, you should use a Managed Instance.
As an aside, this problem with datetime fields depending on the server's timezone isn't just a problem in Azure. We've had countless joys with a naive on-prem CRM application storing birthdays in a datetime-field with 00:00:00 as the time part. When we switched to daylight savings time, all those people were suddenly born at 23:00 the day before. Try explaining that one to your users, who (of course) only see the date part in their UI.
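That birthday bug is easy to reproduce in any stack that normalizes naive local midnights to UTC. A Python sketch (assumes the system tz database is available to `zoneinfo`; Berlin is UTC+1 in winter):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # needs a tz database on the host

berlin = ZoneInfo("Europe/Berlin")

# A birthday stored as "midnight local", then normalized to UTC:
stored = datetime(1990, 1, 15, 0, 0, tzinfo=berlin).astimezone(timezone.utc)
print(stored)  # 1990-01-14 23:00:00+00:00 -- "born at 23:00 the day before"

# A UI that renders only the date part of the UTC value is off by one day:
print(stored.date())                     # 1990-01-14
# Converting back to the local zone first recovers the intended date:
print(stored.astimezone(berlin).date())  # 1990-01-15
```

The stored value isn't wrong, strictly speaking; the UI that truncates it to a date without converting back to local time is.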
Point #1: Collations and time zones have nothing to do with each other. Collation affects only sort order and equality comparisons of strings, and has zero effect on dates. If you're going to give advice and quote documentation articles, you should know this already.
Point #2: That developer documentation carries the same implicit "you". It's targeted at "the developer", who is not me. I'm not even a DBA! There is a whole team of DBAs that are not me ("you"). There are several dev teams for the suite of applications hanging off this database, none of whom are me ("you").[1]
Point #3: Did you notice how that page straight up lies and says that GETDATE() and GETUTCDATE() return different values? Developers go to the page, see this, assume all is well, and then I get to spend 4 days of 12-hour emergency scripting to fix things up.
Point #4: I love how I go on a rant about the misuse of the word "you" in an enterprise setting, and then your reply is "if you need to interpret date". Who is this mythical "you" that is in charge of everything, everywhere, for all time in a 15K-user enterprise!? I don't get to decide anything. I don't write the queries. Literally hundreds of people do, only half of whom even work here! There are third-party report tools, integrations, import/export utilities, ETL, you name it. They all assume that local time is local time, not UTC.
Point #5: "Note the addition with the exception of Azure SQL Managed Instance: if you need server-level control over the time zone, you should use a Managed Instance." -- fantastic suggestion, why didn't I think of that? Oh I did, and discovered that Az SQL MI doesn't support zone-redundancy, which makes it a no-go for many Enterprise applications with strict uptime requirements. In this case it was absolutely rejected by the project steering committee (people not me).
Stop apologising for Azure's bad decisions.
AWS does this correctly. There is no excuse.
In 2022, if I want a zone-redundant PaaS offering for "Microsoft SQL", the only option is a non-Microsoft company: Amazon Web Services.
If you go with Azure, your data will be shredded the second three decades of muscle memory kick in for some random developer, OR you have to abandon your high-availability requirements.
What kind of choice is that!?
Narrator: the type that makes customers stop recommending Azure and start recommending AWS.
[1] I went to the lengths of writing a Log Analytics alert to trigger on any use of the 'GETDATE()' function after having searched & replaced all uses of it with the workaround.[2] It gets triggered regularly. This is reality. Microsoft would like to simply pretend reality doesn't exist.
[2] The workaround on that page is wrong: it uses a fixed offset and is not daylight-saving aware. I mean... just... oh my god, how do you not see how bad this all is!?
The real workaround is something like this:
CAST(SYSDATETIMEOFFSET() AT TIME ZONE 'AUS Eastern Standard Time' AS datetime)
(I'm putting this here because random blogs are recommending using a scalar function, but that kills the performance in many common scenarios.)
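The fixed-offset failure mode is easy to demonstrate outside SQL too. Here's a Python sketch of the same conversion for Sydney, which is UTC+11 during DST and UTC+10 the rest of the year (assumes a tz database is available to `zoneinfo`):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # needs a tz database on the host

sydney = ZoneInfo("Australia/Sydney")
fixed = timezone(timedelta(hours=10))  # the naive fixed "+10:00" workaround

jan = datetime(2022, 1, 15, 3, 0, tzinfo=timezone.utc)  # Sydney on DST (+11)
jul = datetime(2022, 7, 15, 3, 0, tzinfo=timezone.utc)  # standard time (+10)

print(jan.astimezone(sydney).hour, jan.astimezone(fixed).hour)  # 14 13 -- off by an hour
print(jul.astimezone(sydney).hour, jul.astimezone(fixed).hour)  # 13 13 -- happens to agree
```

A fixed offset is right for half the year and silently wrong for the other half, which is exactly the kind of "sometimes dates appear to be wrong" bug that sails through UAT.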
I could picture you taking a deep breath before you wrote what came afterwards. I just wanna hug you and cry, man! I've been through so much shit because of Azure's weird behaviour at work: ruined weekends, late-night troubleshooting because Azure is weirdly broken somewhere. And this is not isolated to one service. I'm talking about databases, App Services, AKS, Azure Functions, Blob Triggers, what the fuck not. I've had to do weird workarounds for issues which should not even be issues in the first place. Most of the production issues I've had with a fairly complicated clusterfuck of a project were Azure-related. I don't even know what to think anymore!
> I went to the lengths of writing a Log Analytics alert to trigger
I've done this lol. Not for the date issue but something else.
As a reformed CTO of a rather large bank, your two posts above make me happy.
Not that you faced this. For that, just regretful empathy.
Rather, happy to recognize the “there is no excuse” attitude, the concrete and informed feedback, and the determination to keep digging into the problem till coming up with specific workarounds.
Neither the vendor nor your company understand what folks like this are worth.
Always use UTC when storing or crossing the wire with dates. I've been setting all servers, cloud or local to UTC when I have that option for about a decade and a half... I nearly always store dates as UTC unless there's a location attached, such as for a localized event... Doing anything else leads to heartache, despair and weird bugs.
Reading TZ1, converting to TZN, then sending to a user is far harder than storing as UTC, reading as UTC, and letting the client convert to/from local time.
This only works for new applications or systems that will never have older applications connected to them. Databases in particular tend to have a plethora of "random stuff" talking to them and sending ad-hoc queries.
Microsoft's sales reps have been running around like they're on coke telling everyone who's willing to listen to them to lift & shift legacy apps to the cloud.
They have KPIs around migrations to PaaS services, so they always recommend moving applications to App Service and Azure SQL first.
Customers listen, migrate their apps, and get burned like we did.
Their PaaS services are crazy over-priced and have poor performance, which makes their value proposition a no-go for most newly developed apps. At the same time, they're not suitable for Enterprise users migrating old apps either.
Can't speak to the planning that did or didn't happen on this, or to any risk assessment... I would look to whoever did the risk assessment for not understanding what was happening well enough.
That said, whenever I've touched a legacy app for any forward migrations... I've learned that date/time handling is usually one of the first things you should confirm. Just my own take on this.
This happened to me too, and I gave up on Azure. A few weeks later I got an email from some consultant Microsoft supposedly hired to onboard new customers. I ignored a few emails, but eventually I just sent back a very brief rant about this experience. I never heard from them again.
I use both today. Not a big fan of either. Azure is better if you are really deep into Microsoft tech. I'd lean towards Azure because of our tech stack. If we were not a Microsoft shop, I'd be heavily in favor of AWS.
My problem when I still used Azure, ca. three years ago, was being nickel-and-dimed. The intro tier of their Postgres database does not support use on VPC/private networks, requiring cumbersome and less secure node-locking.
If you are in .NET land then Azure is the better option; having used both, it's just a ton easier. The Publish integration with Visual Studio, even the free edition, is really nice. You basically download the publish profile from Azure, import it, and that's it. Once you've set that up you can right-click > Publish at will. It's super nice, and while I have no doubt you could achieve the same thing on AWS, you'd have to figure out and set up a ton of things yourself.
This is fine for hacking a POC together, but for anything in production I'd want to go the immutable-infra route deployed from a CD server, so the Visual Studio integration doesn't really help.
At a high level I'd say Azure is more coherent and seems to have a clearer technical vision. They achieve this by moving slower and being more willing to deprecate services/APIs which don't meet that standard. AWS moves quicker and offers leading-edge tech first, which means it is often more battle-tested and reliable, but they don't benefit from the opportunity to learn from others' mistakes, which means they tend to expose more warts and inconsistencies to end users.
Exactly. I was reading OP's post and, while I can see why it sounds appealing, it reminded me too much of using FileZilla to deploy some files to production.
Nah, after a few decades of development you drop the pretense and go with whatever is easiest / least friction. And if you can't see why you'd go with Azure when you're a full .NET guy, then I don't know what else to say.
I was surprised to see so little about Oracle Cloud on this thread. I also use their free VMs for personal stuff and for that they are amazing. For actual business needs I would never trust Oracle in the slightest but their free tier is the best among all providers I've found.
In the end I signed up for a paid Oracle account so I could host my domain's DNS on their cloud thingamajig, but it's the only paid service I use while the VMs remain free. For the last few months Oracle has been billing my card 0.01 EUR/month for the DNS, which I find hilarious as it's probably less than their transaction fee. It also means I can file support tickets if I need to, which I think the free tier lacked.
Wait, so all that text just for telling us that a company - driven by capitalism - is not giving away cloud compute power for free? What a surprise! And AWS is better in that regard because...?
Sometimes I wonder how articles like these can make it to the hot page.
Getting quota with Azure is just so hard - it takes literally months, and they force you onto invoice billing. I had to build out on other platforms while waiting for trivial requests to get approved.
My main problem with Azure is all the artificial restrictions they have between 'SKUs'. In GCP and AWS you can, for the most part, mix and match components. AWS announces a new EBS volume type, you can be pretty much assured it will work with any instance type there is. Maybe some are more optimized than others, but if you want to attach them, they will work.
Not so much on Azure. You need to match your 'premium' instance with your 'premium' storage. And even when you use the correct SKUs (and correct instance types and whatnot), there are many limitations. For example:
> Premium SSD v2 limitations
> Premium SSD v2 disks can't be used as an OS disk.
> Currently, Premium SSD v2 disks can only be attached to zonal VMs.
> Currently, Premium SSD v2 disks can't be attached to VMs in virtual machine scale sets.
> Currently, taking snapshots aren't supported, and you can't create a Premium SSD v2 from the snapshot of another disk type.
> Currently, Premium SSD v2 disks can't be attached to VMs with encryption at host enabled.
> Currently, Premium SSD v2 disks can't be attached to VMs in Availability Sets.
> Azure Disk Encryption isn't supported for VMs with Premium SSD v2 disks.
> Azure Backup and Azure Site Recovery aren't supported for VMs with Premium SSD v2 disks.
AWS has very few artificial limitations. New disk type launched? Want to use as part of an Auto-Scaling group? Knock yourself out. Want that as a boot disk? Sure. Encryption? Available.
And that's only an example on the storage side. Compute, storage and networking are the bread and butter, and one of the very first services you would expect in the cloud today - before all the abstractions are added on top.
Many of those limitations are not very well documented. You end up learning about them when your Terraform code runs (assuming the Azure APIs didn't throw cryptic 400 errors first).
Then there's all the hidden surprises. I'm pretty much used to attaching a load balancer (any type) to any instance with no issues whatsoever – on AWS. Imagine my surprise when I added a private load balancer to an Azure instance and it lost internet connectivity! I understand all the SNAT B.S. now, but it feels like a very thin layer over a network appliance. And not what you expect in the cloud.
Maybe the more specialized Azure services (Active Directory, AKS) are a better value proposition. But compute, networking and storage? Go AWS or, even better, GCP. GCP will give you seamless VM migrations (none of this scheduled-maintenance stuff) and easy access to anycast load balancers. AWS will give you a myriad of instance types and storage options, and decent networking. All of them except Azure will give you multiple availability zones everywhere.
Services in preview are just that, previews, and may not be fully featured yet. I would expect many or most of these limitations to be removed by the time the service reaches GA (just speculating; I don't have any specific knowledge of the service).
AWS does have a couple of annoying un-configurable limitations:
* There is a Global Accelerator idle timeout of 340 seconds for TCP connections
* And a Global Accelerator idle timeout of 30 seconds for UDP flows.
These timeouts are not customer or support-configurable.
Many customers have requested configurable Global Accelerator timeouts, and AWS at least has a feature request open for it, but we have no way of knowing whether or how they're prioritizing it.
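Until that lands, the usual workaround is TCP keepalives tuned well below the 340-second cutoff, so the flow never looks idle to the intermediary. A sketch (the per-probe keepalive socket options are Linux-specific, hence the guard):

```python
import socket

def keepalive_socket(idle=60, interval=20, count=3):
    """TCP socket that sends a keepalive probe after `idle` seconds of
    silence, so an intermediary with a ~340 s idle timeout never sees
    a quiet connection."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only tuning knobs
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return s
```

This only helps TCP; for the 30-second UDP flow timeout you'd need application-level heartbeats instead.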
Azure is the obvious choice for the enterprise. Azure App Service is dead simple to deploy to and manage for an enterprise monolithic application where elasticity really isn't a requirement. If you stay within the ecosystem of AD + Azure DevOps + App Service it's a no-brainer. For anything else, AWS.