I think it's a bit strange to reply directly that you did not plagiarise any images when the opening table cell background is a direct lift of the SVG from our site...
As a neutral observer, it’s hard to make sense of what exactly you’re saying was copied.
What is the opening table cell background? Is this something in the app code itself, or something that made it onto the public site? And this SVG was custom-made by your company?
The asset was created in-house by our design team specifically for our website (not outsourced or a template) and was copied identically. The asset itself is a small thing, but the denial of something which is materially provable seemed very odd to me, hence my reply!
Thanks for sharing the details. They got to the image before I got to this thread, so this is super helpful.
And that’s pretty appalling. As a product manager who had to be aware of what our competitors were doing, I can’t even begin to understand how someone thought directly lifting assets was a good idea.
For legal reasons at minimum, but for ethical reasons as well.
We always made a point of not letting our design people even see competitors' stuff for exactly this reason.
Native bindings can use libuv to offload work to other threads and then re-trigger the JS execution when it completes.
This is how a lot of the native Node libraries work under the hood to allow parallel IO operations! It's also an incredibly powerful performance optimisation tool in more complex, scale-out NodeJS deployments.
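You can see the same mechanism with Node's built-in crypto module: pbkdf2 is a native binding whose hashing runs on the libuv threadpool. A minimal sketch (the iteration count and timings are arbitrary, just enough to make the effect visible):

    // crypto.pbkdf2 is a native binding; the hashing is offloaded to
    // libuv's threadpool (4 threads by default, see UV_THREADPOOL_SIZE),
    // not run on the JS thread.
    import { pbkdf2 } from "node:crypto";

    const start = Date.now();

    for (let i = 0; i < 4; i++) {
      pbkdf2("password", "salt", 1_000_000, 64, "sha512", (err) => {
        if (err) throw err;
        console.log(`hash ${i} done after ${Date.now() - start}ms`);
      });
    }

    // This timer fires almost immediately: the four hashes run in
    // parallel off-thread while the event loop stays free.
    setTimeout(() => console.log(`loop free at ${Date.now() - start}ms`), 10);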
Python bindings for native libraries would perform the same, since they call into the same native code, while pure Python code would speed up thanks to JavaScript runtimes having JITs.
At work we do some pretty significant parallelised computation (using Node and Rust), and it has performance equivalent to a relevant Python-based library.
There are nuances to how data enters and exits the VM runtime that mean I couldn't say with confidence whether performance would be better or worse in the general case - but either direction is at least technically viable.
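As a rough illustration of what I mean by the boundary (the addon path and its sumSquares export are hypothetical, in the N-API style, not a real library):

    // Hypothetical native addon: the marshalling at the VM boundary is
    // often the expensive part, not the computation itself.
    import { createRequire } from "node:module";
    const require = createRequire(import.meta.url);

    // "./compute.node" and sumSquares() are assumptions for this sketch.
    const native = require("./compute.node") as {
      sumSquares(data: Float64Array): number;
    };

    // A TypedArray can cross into native code as pointer + length with no
    // per-element conversion; plain objects or JSON would be copied and
    // translated on every call, which is where the nuance lies.
    const data = new Float64Array(10_000_000).map((_, i) => i);
    console.log(native.sumSquares(data));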
I think this is definitely the primary challenge with building great, modern enterprise software. You need the flexibility to meet the varied demands of differently shaped businesses, but the trade-off is complexity.
I personally think that we've come a long way with no-code tooling and UI these days, which makes these problems easier to tackle well - but it's a constant battle.
The comments here have mainly focused on the issue of instant suspension - which is obviously deeply concerning - but I also feel like there is a huge issue at Cloudflare regarding their Enterprise pricing model.
Cloudflare's sales team is one of the least effective sales organisations I have encountered in this space, and the Enterprise pricing model compounds it. Given the technical nature of their product, it's extremely hard to explain even basic uses of the tool to them, and things like Workers are near impossible to discuss. I was really unsurprised to see that OP had a failed Enterprise negotiation with them, as I have had the exact same conversation at three different companies now and can imagine perfectly what they were told.
The current offerings of Enterprise and Enterprise Lite simply do not map to the reality of how people use the tool and scale businesses on top of it. I think, in part due to Cloudflare's history of essentially selling bandwidth and caching, the model is fixated on high-volume binary-traffic workloads and simply cannot accommodate the SaaS service model that runs on it and on tools like Workers.
This is mostly a rant and hopefully a small +1 signal that this area needs major improvement - but I would also love to hear if anyone else has had interactions with Cloudflare Enterprise and how they found that process?
(Disclaimer: I'm a massive fan of Cloudflare, a user of their products and hold their stock)
I'm also a massive fan of Cloudflare in general, love their Workers and related products, just that one aspect of account suspension without warning could be improved a little bit :)
> Cloudflare's sales team is one of the least effective sales organisations I have encountered in this space.
I have seen this everywhere. Any large software company seems to operate with 2 completely different heads when it comes to technical sales support.
The "best" experience I've had was with GitHub Enterprise sales, but mostly because they just gave me access to the docs/binaries without much frustration. If I had a bunch of questions about the technology vs cost vs how we actually want use their product, it would have been a substantial nightmare.
I've had the exact opposite experience with GitHub Enterprise sales. It took 3 months to get them to add a new block of users to our existing subscription.
Sometimes I think it's amazing they're able to generate revenue at all, as poor as that experience was. It's a shame we like the product (mostly) so much...
I've been in a sales call with a German CF representative, and it just seemed to be a third party who was excited about the features, using the demo account half for their hobby and half for demonstrations, with almost every feature set up and demoable. They even hosted their own toy AS on Cloudflare Magic Transit.
We moved from Heroku to GCP after approximately two years of using Heroku. (This was three years ago so some information may have changed)
The move went incredibly smoothly, has saved us money, and has allowed us to "modernise" our infrastructure to take advantage of some of the newer trends in infrastructure and security.
To address your direct questions:
1. Not very long. We were running a NodeJS app with a web layer and several background workers. We were able to get this running on a Google Compute Engine VM in about 1 day using Packer. The whole migration process took about two weeks start to finish.
2. Our team is relatively experienced and had worked with all three major platforms and Kubernetes (although we chose not to use Kube in this case). We are definitely a team of developers, not sysadmins, though, which meant we had to learn some new things, particularly about tuning NodeJS apps on raw Linux.
3. I don't think we learnt too much (other than the undocumented rough edges of both platforms) but it was definitely worth it for financial and quality reasons.
4. It's a relatively hard metric to calculate when the company is growing its user base and feature set quickly - but I would estimate it at around 50%.
5. One app, with around 5,000 requests per second. NodeJS / TypeScript / Rust.
6. If you have only ever used Heroku I think it would be worth getting comfortable with Containers (Docker basically) and making your app run in a container. From there you have tools like Railway (https://railway.app) or Cloud66 (https://www.cloud66.com) that can do most of the rest for you.
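To make that containerisation step concrete, a minimal Dockerfile for a Node app looks something like this (the port, paths and entry point are placeholders, not anything from our actual setup):

    # Minimal sketch of containerising a Node app
    FROM node:18-slim
    WORKDIR /app

    # Copy manifests first so the dependency layer is cached between builds
    COPY package*.json ./
    RUN npm ci --omit=dev

    COPY . .
    ENV NODE_ENV=production
    EXPOSE 3000

    # dist/server.js is a placeholder for your compiled entry point
    CMD ["node", "dist/server.js"]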
I'm curious about your reasons to decide against Kubernetes and instead opt for plain VMs + Packer, especially when already familiar with the platform.
We rely heavily on GCP and, whilst nothing is impossible, it would be incredibly painful for us to transition away.
I personally think it's quite unlikely that they shutter the entire business and all the product lines - many of them are extremely high margin and are already in a stable position. Given how many of their products they have designated as Google Cloud (last I heard, Workspace is being attributed to Cloud revenue now), it seems more likely that we would see the killing of new or in-development product lines such as Anthos or AlloyDB, whereas the more established and profitable areas, such as Workspace, BigQuery, Looker, GKE, Bigtable and others, would be allowed to continue to exist in some form.
As someone who has used all the major Cloud Providers, I have generally found the Google products to be the best engineered with “the right sized nut” normally being available, rather than an exercise in cryptography to decipher the documentation for a workaround. It would be a colossal waste and shame IMO to throw that away.
As a customer, I think they are struggling (as the article concurs) with their approach to Sales, Marketing and Support. It is frankly years behind their competition, and while recently (~2 years) we have seen an increase in effort, there has been no real shift in the end result. Part of the problem is their insistence on using resellers and partners for any deal of interesting size, where these partners themselves are really not up to scratch.
For now, we won’t be taking action to de-risk our position beyond ensuring that a migration of some kind is technically feasible given sufficient notice. I think they will start to drop the investment into the platform and focus on extracting revenue from what they’ve got - but I don’t see it going the way of Stadia and siblings from the consumer side. It’s my hope that revenue (and the narrative it allows them to tell the market around diversification) is worth keeping their crown jewel products running.
I’ve watched Google’s behaviour towards its products for the 10 years cited and referenced it in my post above.
Reviewing the infamous https://killedbygoogle.com/ I don’t see much to draw the conclusion that Google kills enterprise products. Which products do you use to draw this inference or are you simply saying you believe the mentality they use for their consumer products would be used for their cloud products?
In either case, I do not think it’s impossible they kill GCP. Just that it is extremely unlikely and without historical precedent w.r.t their cloud business.
> without historical precedent w.r.t their cloud business
That’s not hard, considering they’ve existed for only a few years. In that time they’ve done plenty of things that make me feel like their behavior is questionable (the 10x price hike for maps API comes to mind)
Even if Google keeps GCP operating indefinitely, I feel like there's always the shadow looming of account termination/suspension with no warning or explanation. Do you have a strategy for if this happens?
As a business customer, we have a dedicated account manager and customer support engineer whom, in the worst instance, any member of our incident response team could ring directly.
We also work with a Google Cloud Partner that we maintain out-of-band communication with, who would be able to mediate account restoration.
All in all, I do not think a business account faces this risk at all and it’s largely a narrative developed from their consumer business where humans are hard to reach.
We’re currently migrating to Spanner for a variety of reasons - but the mandatory downtime on their Postgres CloudSQL offering will be the part I miss the least.
It’s insane that even with all of their HA and failover turned on they take the whole cluster down for as long as they like every few months!
Surely product makes these decisions, not engineers, right? I agree that customer empathy is important, but I don't think we can conclude that the engineering team (rather than the product team) is the source of the deficiency?
> Surely product makes these decisions, not engineers, right? I agree that customer empathy is important, but I don't think we can conclude that the engineering team (rather than the product team) is the source of the deficiency?
I haven't worked inside AWS or GCP, but I've never seen product get everything they want, especially around maintenance/downtime. If "less downtime" is on the roadmap but engineering is constantly pushing back "that'll be really really hard and take a long time and they're just using it wrong anyway," I can't imagine it getting done as quickly as at a place where the engineering team was also focused on customer satisfaction.
> that'll be really really hard and take a long time
It probably is hard and intensive. Engineering shouldn't lie and promise that it will be easier. Product has to take that engineering estimate and decide whether to work on uptime or some sexy feature (and sexy features usually win because of perverse incentives).
Moreover, I have a hard time believing this for a couple of reasons. First, I've scarcely met engineers who were opposed to improving product reliability, maintainability, etc.; the portrait of Google engineers arguing that database services fundamentally shouldn't be HA (and that customers are "using it wrong" for wanting HA DBs) is particularly hard to believe. Second, I've never heard of an organization where engineering held political power over product decisions, but I have worked in several places where product dictated engineering solutions. Businesses trust product more readily than engineering, because the things engineering is always petitioning for are abstract and "costly" (deferring some immediate profit for reduced costs in the long run) while the things product wants are usually tangible and profitable.
Sounds familiar.
Product and sales areas of the business look at immediate revenues - and hopefully profit - but are always keen to pursue those at the cost of increasing technical debt on the engineering side of the business. Unless sales see a real impact from technical debt, they will always choose the short-term approach.
Agreed, and I don't think that's even necessarily the worst thing. It just means that engineering and product/sales have to have a conversation, mutual trust, and a shared vision that extends beyond the next quarter. These are hard things to cultivate, however.
I agree. In whichever case, blaming individual engineers seems weird. Rarely are individual engineers to blame, and even when they are the organization should be robust enough to route around the occasional deficient engineer.
It's a full-company culture problem where ~everyone has personal responsibility.
Behavioral properties like speed, security, customer success and uptime are end-to-end. Everyone and everything on the hot path has to be on board, as a single violator means nobody achieves it. Operationally, achieving that means company norms for how people collaborate, plan, execute, etc.
It's 100% fine not to provide any of these, and that's even reasonable given how hard it is to change a company's DNA end-to-end... but it's better for that to be an explicit decision. Likewise, if you're starting a company, it's a lot easier to pick and ingrain these habits ahead of time.
I tried explaining this to some Azure product teams, and they gave me a blank stare in return.
Sure, you can have zone-redundant Azure SQL Databases, but not Azure App Service!
You can have zonal Azure App Service, but with a different network model than the default, so it's a breaking change for some apps.
It's as if nobody has actually sat down at Microsoft to build something similar to what their customers are building. It's all just tech demos, "get your blog hosted in 5 easy steps", and crap like that. Nobody at Microsoft has ever built anything of substance with the majority of their own platform.
It was the same story with Windows Presentation Foundation (WPF). It was hot garbage when it was first released, but Microsoft kept telling all their partners to prefer it over the legacy GDI+. People tried and failed to write GUIs in it. When, after many years, Microsoft finally tried to convert Visual Studio to WPF -- the first time they had used their own framework for one of their own apps -- magically the core issues were fixed and WPF became viable for real apps.
My experience with MSFT is that the account reps are clueless by design. Even the sales assistants with fancy technical titles are short of clues. Easier for them to oversell when they don’t know what’s really going on underneath. We have to go 3 or 4 deep into their product org to find someone who will fess up to product issues that we are already aware of.
Also, they now require you to become a Microsoft Partner if you want to use the OAuth2 login not only for personal Microsoft accounts but also for other Microsoft accounts, such as those provided by a business running Office 365. They changed this ~3 months ago and are now not able to verify people in time - even if your app just wants to read the user's name, email and maybe a profile picture and nothing else. The process is much less obvious than the same thing with Google or Facebook. Even the ZIP code has to include spaces, as if it were written on a letter, otherwise you will not be able to send the form, aaand there is no hint about this requirement.
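For anyone unfamiliar: which account types may sign in is selected by the tenant segment of the authorize URL. A sketch with placeholder values:

    // "consumers" = personal Microsoft accounts only;
    // "common" = personal + work/school (e.g. Office 365) accounts,
    // which is the case that now requires the partner verification.
    const tenant = "common";
    const authorizeUrl =
      `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/authorize?` +
      new URLSearchParams({
        client_id: "YOUR_APP_ID", // placeholder
        response_type: "code",
        redirect_uri: "https://example.com/callback",
        scope: "openid profile email",
      });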
In general, Microsoft, Google and others are behaving really enterprise-y in that they are slow, you cannot reach anybody of importance to solve massive issues with their products and everything is actually quite expensive for what it does.
I use Aurora for exactly this reason. RDS is just a great product. It doesn't even need a Cloud SQL Proxy equivalent, since you can just set up the authenticator in the database itself.
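From Node that ends up looking roughly like this, assuming the @aws-sdk/rds-signer package and the pg driver (the hostname, region and user below are placeholders):

    // Instead of a stored password, request a short-lived IAM auth token
    // and hand it to the regular Postgres driver.
    import { Signer } from "@aws-sdk/rds-signer";
    import { Client } from "pg";

    const host = "mydb.cluster-xyz.eu-west-1.rds.amazonaws.com"; // placeholder
    const signer = new Signer({
      hostname: host,
      port: 5432,
      username: "app_user", // needs GRANT rds_iam TO app_user; run in the DB
      region: "eu-west-1",
    });

    const client = new Client({
      host,
      port: 5432,
      user: "app_user",
      password: await signer.getAuthToken(), // short-lived IAM token
      ssl: true,
    });
    await client.connect();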
NO, RDS includes non-Aurora versions of many DBs, plus MySQL- and Postgres-compatible Aurora servers, plus MySQL- and Postgres-compatible Aurora Serverless.
Thanks for clarifying. I'd associated RDS with the non-Aurora offerings of MySQL and Postgres. Judging by upstream version compatibility, it appears Aurora is more heavily forked than its non-Aurora siblings.
I don't think Aurora MySQL/Postgres are just forks; I think they are a completely custom datastore behind a MySQL- or Postgres-compatible interface (which probably reuses a lot of non-engine code from the open-source base database).
Regardless, it looks like anyone choosing Aurora should not hold their breath for MySQL 8 or Postgres 10+ compatibility. It seems like only one major version bump has happened since they launched the first one (MySQL 5.6 to 5.7).
Which is fine. It can just be a little confusing as they drift and the caveats grow.
> Regardless, it looks like anyone choosing Aurora should not hold their breath for MySQL 8 or Postgres 10+ compatibility.
Current Aurora Postgres is compatible with pg 12.4; pg 10+ support has been around so long that several versions that support 10+ have already been EOL’d by Amazon. Even Serverless, which lags behind, is on 10.x.
WAIT. Be careful. That is a super expensive product with a high likelihood of lock-in, and it doesn't support all SQL features. Also: I run hundreds of GCP databases and have never run into "the mandatory downtime on their Postgres CloudSQL" - maybe it only affects Postgres?