> The company leveraged Instagram's existing monolithic architecture and quickly iterated to create a new text-first microblogging service in record time.
So they, in fact, had already built out Instagram as a monolithic service based platform and essentially forked it, tweaked it, tested it, and shipped.
The title of the article is highly misleading, suggesting they shipped from 0 to full platform in 5 months specifically because they used a monolithic architecture rather than a microservice-based one.
0 to full platform in 5 months doesn’t seem impossible when you are Meta, especially for a Twitter-like.
Still impressive, but the hard part of launching a Twitter competitor is not technical, especially when you already have a virtually infinite supply of competent developers and infrastructure, and when you don’t need the product to make money asap.
The hard part is launching it commercially and giving it enough momentum.
Like the adage says, I can code Twitter in a weekend but it will not be Twitter and it will not magically make money.
I reckon I could implement 90% of the end-user functional features of Twitter in a weekend. The remaining 10% of the features would take at least another year, and the non-functionals (performance, reliability, observability, scalability, etc) would take me a lifetime. But that second sentence turns Twitter from a useful little tool into a valuable product.
Precisely. Most developers, including seasoned ones, look at an application and say "I can build it over a weekend." Yes, you can. For your own use. For it to become a real product that is stable and scalable takes much more time and effort.
I think you mostly describe the problem with modern web development. There are frameworks and tools that make implementing Twitter in a weekend possible. What we need is to make the non-functionals (performance, reliability, observability, scalability, etc) 10-100x easier.
You’re right, I didn’t even think about these! I still wouldn’t include them in end-user-facing features, but they are certainly necessary to be a valuable product at Twitter scale.
> I reckon I could implement 90% of the end-user functional features of Twitter in a weekend.
Building a Twitter clone is an exercise that's frequently used in system design interviews. It's a popular exercise because it's a trivial system to implement to meet client-driven requirements, but the feature set becomes progressively more complex when scale, performance, and business requirements are thrown into the mix.
I think someone complained about Twitter search to Elon Musk when he took it over, saying how he could fix it easily. From memory, he left pretty quickly and said he was unable to fix it.
The top comment is funny in retrospect. "He already removed the login prompt which was a huge annoyance as a read only user of twitter." Twitter is now significantly more unusable as a read-only user, as you can't see beyond the first tweet in a thread.
I have used FB since circa 2008 (now much, much less), and it has been a bugfest throughout the years, right up to these days. I don't mean unexpected, confusing behavior but outright bugs: comments added twice or not added, some images not uploaded into albums, stuff disappearing right in front of me while writing a comment, double notifications for the same thing, the page just dying, and so on and on. No issues anywhere else, so it's not due to a sloppy internet connection. I'm talking about the desktop web client; I removed everything from Meta from my phone long ago and am not going back.
If it weren't for the social part of connecting with long-lost folks from the past, I wouldn't use a service that has behaved like a student project for a decade and a half.
So actually shipping something that works, and so fast, by the same folks, is indeed impressive to me (not sure if the same sort of basic bugs are there, though; I'm generally moving away from these 'social' platforms, which in my view make us as a whole society more antisocial. We are simply built for physical interaction on many levels; we thrive with it and wither away without it, even us introverts).
TBH, it could take more time to clone and tweak a micro-service architecture.
Arguably, they'd take another approach altogether in that case (keep some services in common and clone as few services as possible for instance), but it's still notable that they had the option to clone it wholesale in the first place.
Yeah this is like working for Google and launching "YouTube Texts" where some engineer removes all the actual videos leaving only the code for comments and calling it a new thing.
> Despite the apparent advantages of reusing Instagram's platform for Threads (much faster delivery time), Malkani admitted the company introduced a substantial amount of technical debt that must be addressed in the future.
If they forked Instagram for Threads I bet the codebase is WAY more complicated than it needs to be.
Yeah, nothing special about the infrastructure indeed... They just reused IG infra and built features on top of it. Mentioning microservices is nonsense here, just there for clickbait.
Remember, microservices refers to the architecture of people. It is the emulation of the service architecture found in the macro economy (i.e. businesses buying and selling services with each other without any direct coordination), but localized in the micro economy of a single business.
There is a prevailing idea that large organizations do not have the communication capacity for many people to all work together, believing that more people require more meetings to coordinate, which at some point reaches a critical mass where any change requires more meetings than there is time in the day. Teams emulating the macro economy – a world where you can't get Google on the phone no matter how hard you try, where all you get is written documentation – is thought to be the solution to that problem.
So there would be something novel about multiple, disparate teams showing they can all work together under direct coordination while actually getting things done and not get bogged down in endless meetings. But, while the article is light on details, it appears that they actually did stick with a microservices model – creating a copy of Instagram so that the Threads team could provide an independent service without needing to communicate with the Instagram team...
I have worked in small companies and big companies, on monoliths and microservices. Good and bad. Everyone has already gone into the weeds yet again on that topic.
I'm just confused about why these types of stories keep coming up recently. I thought the barrage of SQLite posts was a lot, but "monolith vs. not monolith" seems to be even more frequent.
What exactly is being discussed in these cases? Most large systems don't boil down to pure "microservices" or "monolith". Look at the diagram on that page ... big clouds that talk to little clouds, big clouds that talk to yet another in-house KV store labelled "database" (that has boxes inside of it), big clouds that talk to cylinders, that in turn talk to more cylinders.
Then you get to the description ... it's Python running Distillery (custom Django), talking to "WWW" (PHP), with data stored in TAO using UDB. Add sharded MySQL, ZippyDB and Async (just to get some serverless in there?).
All of this with a term I've never had to use once in all these decades, "Server-Driven UI (SDUI)". After reading the explanation, this seems to be returning UI as JSON to be put together on the client, as to avoid waiting for a release cycle for UI changes? This sounds a lot like HTML and CSS.
What does any of this have to do with monoliths vs. microservices (or not monoliths)? I don't even know what I'd label this system. 8+ different technologies spanning server and client, but referred to as a "monolith"? That term has lost all meaning to me.
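To make the SDUI point concrete, here is a minimal sketch of the idea: the server returns a JSON component tree, and the client walks it with a small registry of renderers, so UI changes ship without a client release. The component schema and names here are hypothetical, not Meta's actual format.

```python
# Minimal sketch of server-driven UI (SDUI). The server sends a JSON
# component tree; the client interprets it. The schema is illustrative.
import json

# What a server might send instead of shipping new client UI code.
SERVER_RESPONSE = json.dumps({
    "type": "column",
    "children": [
        {"type": "text", "value": "Hello from the server"},
        {"type": "button", "label": "Tap me", "action": "open_profile"},
    ],
})

def render(node: dict) -> str:
    """Client-side interpreter: maps node types to rendered output."""
    if node["type"] == "column":
        return "\n".join(render(child) for child in node["children"])
    if node["type"] == "text":
        return node["value"]
    if node["type"] == "button":
        return f"[{node['label']}]"
    return f"<unknown {node['type']}>"

ui = render(json.loads(SERVER_RESPONSE))
print(ui)
```

Seen this way, the "sounds a lot like HTML and CSS" comparison is apt: it is effectively a bespoke, app-specific markup language delivered over the API.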
Funny they ran into timezone issues and a lot of technical debt. Now that I can relate to.
> Despite the apparent advantages of reusing Instagram's platform for Threads (much faster delivery time), Malkani admitted the company introduced a substantial amount of technical debt that must be addressed in the future.
This is not the feel-good "ye olde ways are always better" post that the headline suggests. This is a quick-and-dirty ship it by tomorrow™ and we will worry about building it properly later.
This strikes me as funny: what could "properly" mean to an industry if a successfully launched and operational piece of software serving clients globally somehow doesn't meet the requirements.
I think this talk of "proper" is just purism, its not grounded in reality or sound engineering practices.
The article should be lauded as kicking off a new era of practical engineering that rejects software purism.
It's a live service. If it "serves clients globally" with minimal basic functionality at launch, then that's a valid launch. But if that's achieved by being a mess under the hood ("technical debt"-like stuff), it is probably expensive to operate, hard and brittle to maintain, and very difficult and slow to extend and expand.
That hardly meets all the requirements of a global and ambitious live service that has to move fast to gain traction against established, experienced and aggressive competitors.
It's a valid tradeoff and a successful launch, but it's ok to say it's "not properly built" for what comes next.
One of the requirements could be maintainability, and the fact that it is launched, operational, and serving clients globally tells you nothing about that.
You're basically arguing there is no such thing as tech debt: if a piece of software works, then there can be nothing wrong with it. I would think this is obviously false.
This infoq.com article about a pile of apparently hastily built technical debt is going to “kick off a new era of practical engineering” where everyone “rejects software purism”?
They are not selling shrink-wrapped software. “Operational” is a moving, evolving target.
Meta were executing on sudden nascent demand for a well managed microblogging platform in the wake of Twitter's buyout. A 5 month copy/paste of one of their existing platforms was absolutely "building it properly" in that context.
They could have definitely spent 12-24 months building the platform diligently from naught, but they'd have the same end product, just 12 months too late. The demand could have been satisfied by any one of several players in the industry (Tiktok, bluesky, LinkedIn, Twitter itself, Snapchat, a resurrected MySpace, etc), so rushing to gain first-mover advantage was likely the singular consideration for the project.
If fixing technical debt after the fact costs more than 10 months of operating, you have a problem. And that's not counting the extra hours needed to maintain an archaic codebase. This was more about the uncertainty of Threads succeeding: it's easier to swallow writing off a port than a brand-new implementation.
They profited from it? Bold statement considering it is a Meta product. I would be extremely surprised if it has achieved even a single dollar of profit.
You may have to provide some more info than a plain "No." The question GP asked is sound, since this is exactly what the article alludes to:
>Meta formed a small team to devise an approach to delivering a new service that would directly compete with Twitter in just a few months. Zahan Malkani, a software engineer at Meta, shared how his team was able to reuse the existing Instagram backend components, data stores, and large parts of the existing infrastructure stack but customize them to offer functionality comparable to what now is X.
And due to time constraints it was even made sloppily:
>Malkani admitted the company introduced a substantial amount of technical debt that must be addressed in the future. The team is working on gradually separating the data models from Instagram's as the Threads service gains new functionality so that both platforms can be separated, but the process will take some time.
> "Monolithic Architecture" // shows diagram with like 5 boxes on it...
Those five boxes represent a web proxy/server, an async system (likely analogous to an AMQP server, Celery, etc.), a caching server, and finally the DB server. That looks like a pretty standard monolith at scale to me?
Yes, each of those consisting of a number of (micro?)services of their own. The Async system alone is a monster. The question is can you build a monolith out of dozens of microservices? Answer - anything is possible with the right marketing spin
So what? GP was complaining that a monolith cannot possibly have 5 boxes. (It must have exactly one, I guess, in their vision). You can totally have a monolith that runs the same code base in 5 different "roles".
If they did package the same exact codebase in a single monster binary and ran it that way, calling it a "monolith" would be even more hilarious, but that's not what they did.
Yup. The logistics startup I joined nearly 7 years ago should have done the same. A single VM of say 4 cores and 8 gigs of memory and about 100G of disk would have been plenty for our proposed PoC and it should have run as a monolith. That’s what I would do had I to do it again. Instead what we did was what I call HN driven development: k8s, microservices, cloud. 7 or so months after I joined and on my way out we had nothing to show for it except for a really strong and capable front end dev who was basically faking functionality to appease investors because we the backend folks were so incompetent. (Man I sure have come a long way from those days. I was dumber then. I’m still dumb now — just not as much.)
> what I call HN driven development: k8s, microservices, cloud
I’ll give you cloud, but my takeaways from HN in the last few years have been things like:
* Postgres is a reasonable choice for a job queue
* SQLite is a valid choice for server db at quite large scale
* k8s makes sense once you have 50+ devs
* microservices address issues of team scale, not traffic scale
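The job-queue takeaway above is worth making concrete. Here is a hedged sketch of the "database as a job queue" pattern, demonstrated with SQLite so it runs anywhere; on Postgres you would typically claim jobs with `SELECT ... FOR UPDATE SKIP LOCKED` inside a transaction, while here a guarded UPDATE stands in for the atomic claim. Table and column names are illustrative.

```python
# Sketch of a database-backed job queue (the "Postgres as a job queue"
# idea), using SQLite as a stand-in RDBMS.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT,"
    " status TEXT DEFAULT 'queued')"
)
db.executemany("INSERT INTO jobs (payload) VALUES (?)",
               [("send_email",), ("resize_image",)])

def claim_job(conn):
    """Atomically claim the oldest queued job, or return None."""
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'queued'"
        " ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    job_id, payload = row
    # The status guard makes the claim atomic: if another worker got
    # there first, rowcount is 0 and we retry.
    cur = conn.execute(
        "UPDATE jobs SET status = 'running'"
        " WHERE id = ? AND status = 'queued'",
        (job_id,),
    )
    return (job_id, payload) if cur.rowcount == 1 else claim_job(conn)

first = claim_job(db)
second = claim_job(db)
print(first, second)
```

The appeal is exactly the "boring tools" point: one transactional store instead of a separate broker to operate, monitor, and back up.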
Back in 2017, k8s was all the rage, and all of us were reading about it on HN, I think.
But what I, too, have seen is a greater emphasis on the boring, simple tools here on HN and I love that given all the experience and PTSD from the startup.
I think these stories tend to be based on a bait-and-switch angle.
Minimum viable products only need to implement very basic functionality, just enough to get something out of the door. Limited tech stacks like Laravel/Django/Adonisjs can provide basic functionality, and sometimes a plain static HTML+CSS can serve that purpose as well.
Except that microservices are an organizational tool, which just so happens to have important technical traits. When you start to develop your MVP to actually do something relevant to your business, and you start to assign responsibilities and accountability to specific people running specific projects, you create a need to peel out functionalities and data sources and to isolate and loosely couple them from the rest of the service. You start to need teams that work independently and autonomously.
And that's where microservices come in.
There are plenty of system design exercises that ask you how would you implement Twitter, and the first iteration is a monolith with a RDBMS. It quickly grows beyond that. Why?
I'm not sure if your comment applies to Threads, but if it does (and why not, given the short time to launch), that Threads MVP had 100M app downloads in the first 5 months.
It's 4 orders of magnitude more than any customer of mine got over the whole lifetime of any of their products. It's one order of magnitude more than what the mobile phone operator I was working for 20 years ago had years after launch.
I guess that it means that an MVP is all we need. OK, they started with the Instagram code base but as you write, the core of the product is a pretty standard exercise. I did it a few times myself for some customers along the years, Rails and PHP.
> I'm not sure if your comment applies to Threads but if it does, and why not given the short time to launch, that Threads MVP had 100 M app downloads in the first 5 months.
Irrelevant. I'm talking about the feature set, which is not determined by the number of downloads.
> I guess that it means that an MVP is all we need.
The clients need content. The business, in order to make money, needs way more than that.
Here's the thing I really don't get about Threads... so now when you are on Facebook, Meta is constantly pushing you to Instagram. And within Instagram, they are constantly trying to push you to Threads. The whole thing is just a mess and as someone who pays for advertising and is trying to push content, there is nothing about it that is easy or understandable. Further - it's clear that the FB demographic is older, and I'm guessing skews female. Instagram is a bit younger and I guess you could make the argument is more video-oriented. Threads is like a wasteland - the only reason anyone has an account there is because of this weird feedback loop with Facebook and Instagram where you likely created one almost by accident. It's not at all clear why anyone would use it over Instagram other than the typical goldrush mentality of trying to claim your name there first. It's all so very messed up.
I have loved the Modular Monolith (or Modulith) approach with strict abstraction. It keeps it in same code base/deployment and at the same time keeps it modular enough to be extracted into independently networked services later.
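To illustrate the "strict abstraction" part of that approach, here is a minimal sketch: modules live in one process and codebase but talk only through narrow interfaces, so a module can later be lifted out behind a network boundary without touching its callers. All names are illustrative.

```python
# Modular monolith sketch: modules in one deployment, coupled only
# through explicit interfaces.
from typing import Protocol

class BillingAPI(Protocol):
    """The only surface other modules may depend on."""
    def charge(self, user_id: int, cents: int) -> bool: ...

class BillingModule:
    # Internals (ledger, retries, etc.) stay private to this module.
    def __init__(self) -> None:
        self._ledger: list[tuple[int, int]] = []

    def charge(self, user_id: int, cents: int) -> bool:
        self._ledger.append((user_id, cents))
        return True

class OrdersModule:
    # Depends on the abstract interface, not the concrete module, so
    # swapping in an HTTP/RPC client later needs no changes here.
    def __init__(self, billing: BillingAPI) -> None:
        self._billing = billing

    def place_order(self, user_id: int, cents: int) -> str:
        return "ok" if self._billing.charge(user_id, cents) else "failed"

orders = OrdersModule(BillingModule())
result = orders.place_order(42, 1999)
print(result)
```

The extraction path is then mechanical: replace `BillingModule` with a client that satisfies the same `BillingAPI` interface, and `OrdersModule` never knows the difference.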
Federation recently shipped, and it'll be complete when you can follow fedi accounts from Threads, but my feeling is it will be much more restricted in that direction.
Word through the grapevine on Masto is they went looking for instances to federate with and pay them and people wouldn't partner up with Meta on principle.
No sane person would kick off a brand new application that expects huge traffic on Django + PHP, right?
This goes to show that the technology stack is not as important as devs (myself included) sometimes make it out to be. And that old, boring, battle-tested frameworks and technologies can actually deliver.
I worked in a Bay Area co-working incubator type place a few years ago. Everyone lol'd that I was self-taught and went with Namecheap shared hosting, PHP, and jQuery. What about AWS or Google Cloud at least? What about Node? What about some fancy front-end framework (I forget their names, as there's like 5 and they're all the same yet all different)?
I saw several people drowning in their own tech-stack drama, almost never creating a product.
I put some PHP together and have 34 companies paying.
It's a mess but I'd say that's more because it was my first real project, not because of php and jquery itself.
It's not a mess because it's your first real project, it's a mess because that's what happens when software meets the real world, and moving requirements. Software engineers like to use the term 'technical debt' which is a fancy way of saying a mess :)
That sounds nice, but I have to assume that a self-taught first-timer who learned off YouTube videos is going to have 10x the technical debt of someone who's been doing this for several years and several iterations and has learned some lessons.
IMHO all 3 major cloud providers are overpriced if you have just a simple service. Go with something simpler like Hetzner or DigitalOcean, Vultr, or better yet fly.io or similar if you can.
AWS especially will just suck you in with all the complexity. If you barely have an MVP you’re doing it wrong if you’ve ever thought about IAM lol.
It's all too complicated. One of the people at the co-working place was an AWS expert with tons of certifications blah blah. He spent months and thousands of dollars spinning up some insane micro services architecture for a product in the end he never got built as the real owner grew disillusioned with how it was all going.
Price isn't the only thing to keep in mind.
Django + DRF is still fine, and is still a good choice. Especially for smaller projects that are expected to have a long lifetime with few resources available for maintenance.
Django is stable, well-known, and offers very long LTS. It's easy to find people who have experience with it, it is well documented, and it performs well enough for 95% of existing websites.
You can quite easily convert DRF serializers to TS types/models, that you can then use in your preferred flavor of frontend framework.
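As a hedged illustration of that serializer-to-TypeScript idea: the core of such a converter is just a mapping from serializer field types to TS types. Real tools exist for this; here a plain dict stands in for a DRF serializer's field map so the sketch runs without DRF installed, and all names are illustrative.

```python
# Sketch: generate a TypeScript interface from DRF-style field
# definitions. A dict stands in for serializer.get_fields().
FIELD_TO_TS = {
    "CharField": "string",
    "IntegerField": "number",
    "BooleanField": "boolean",
    "DateTimeField": "string",  # ISO-8601 strings on the wire
}

def to_ts_interface(name: str, fields: dict[str, str]) -> str:
    """Emit a TypeScript interface for the given field map."""
    lines = [f"export interface {name} {{"]
    for field, drf_type in fields.items():
        lines.append(f"  {field}: {FIELD_TO_TS.get(drf_type, 'unknown')};")
    lines.append("}")
    return "\n".join(lines)

post_fields = {"id": "IntegerField", "body": "CharField",
               "published": "BooleanField"}
ts = to_ts_interface("Post", post_fields)
print(ts)
```

In practice you would introspect the serializer classes directly (and handle nested serializers, nullability, and lists), but the one-to-one field mapping above is the essence of it.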
I work with Django and while it’s true there is an unshakeable old-school vibe to it (Is it the docs? The SO posts from 2009? The source code with plenty of Python 2 era code?), other than proper async support, I don’t feel anything about it prevents me from making modern apps. I like my clients separate, so I do. And if I had to build a system with different services (without a specific async requirement), then I might as well patch a few Django services together…
I don’t have a dog in the fight, but it seems silly to use “what FAANG companies are able to make their tools do” to inform decisions for smaller companies.
I'm actually writing an argument against myself, because I'm usually closer to the "let's use the new shiny thing" camp. But that article is a nice reminder that the new shiny thing is not the only way to go and the "old, boring thing" is as capable if used right.
But Meta's versions are significantly different from the old, boring thing that you and I have access to. At my company, we are slowly rewriting our Python services in Go because off-the-shelf Go incurs significantly lower costs than off-the-shelf Python on a per-request basis.
>it seems silly to use “what FAANG companies are able to make their tools do” to inform decisions for smaller companies
Yes, and yet for many years now that's exactly the perpetual mistake that so many companies - both small ones and ones not in technology - would make because they assumed that if Google did it that way, it must automatically be the best.
> This goes to show that technology stack is not as important as devs
Facebook PHP isn't the same as the standard PHP engine though - they've spent 20 years optimizing their own compiler and PHP engine/infrastructure:
> Overall, our experiments demonstrate that HipHop is about 5.5x faster than standard, interpreted PHP engines. As a result, HipHop has reduced the number of servers needed to run Facebook and other web sites by a factor between 4 and 6, thus drastically cutting operating costs.
NB: that's from 12 years ago. HipHop didn't last long; a JITted PHP runtime (HHVM) replaced it at Facebook a few years later. Vanilla PHP also got a lot faster around versions 7 and 8.
The comparison was made in 2012 with PHP 5. PHP 8 is out now and likely includes most of Facebook's improvements. PHP 8 should be fast enough for any web app or backend.
I had a quick look and.. I have no idea. Frontend development at meta works differently than you probably imagine: things aren’t built locally but on dev servers or on-demand instances that all run services that take care of any build jobs in the background. I don’t know if there’s any nodejs involved at any point there or later in CI, but I suppose it’s possible. I still think it’s unlikely because my search turned up barely any mentions of nodejs, so I still think that my main point holds true: there’s very little node in use at meta, and nearly all the facebook frontend services are php (or python for Instagram/threads)
I didn't think "no more PHP". I am just seeing how much non-PHP Meta open-sources, and talking with Meta folks at events who are building stuff for JS, talking about new clients and a varied tech stack, and then you've told me "actually it's still mostly PHP".
That’ll be because Facebook doesn’t use PHP as such, but Hack, which isn’t compatible with vanilla PHP. Hack isn’t really used much outside of Meta, so what’s the point of going through the extra work of open sourcing our stuff if nobody else will ever use it?
When you have buckets of money to throw at cloud spend, then yeah the tech stack might not matter quite as much. "Just throw more hardware at the problem" can sometimes work.
Even stuff that's been around for a bit; I can't speak for web, but on iOS I'm still seeing a lot of cases of people merely experimenting with SwiftUI and not actually switching to it, because UIKit is still more useful.
Is anyone advocating for "micro monoliths" - (building lots of little monoliths all deployed side by side)? That's the level of crazy the monolith versus microservices needs to get to.
Microservices are risk mitigation. If you know what you are doing, a monolith is the optimal way.
We use microservices because we acknowledge that we don't know what the final product will look like, and we mitigate the risks associated with adding and removing an unknown number of features.
Facebook was creating a clone of Twitter: a well-known product, a well-known path. There is no need for microservices.
I would actually put it exactly the other way round.
If you don't have a crystallized architecture yet, prototyping with a monolith is faster. You don't have to make the cuts (this belongs to this service) which are expensive to change later (unlike moving classes between modules).
Doing internal Java/.NET/whatever interfaces is easier than introducing external API (and associated infrastructure complexity, authorization, routing...), they are much easier to tweak. You have transactions, don't have to deal with a lot of asynchronicity, network overhead etc. For a prototype, I'd always rather do a monolith.
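The transactions point above is easy to show concretely: in a monolith, two features can update state atomically in one database transaction, whereas split across services you would need sagas or compensation logic. A hedged sketch, with SQLite and illustrative table names standing in:

```python
# In-process transaction: both writes commit or roll back together.
# Free in a monolith; a distributed-transaction problem across
# microservices.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
INSERT INTO accounts VALUES (1, 100), (2, 0);
""")

def transfer(conn: sqlite3.Connection, src: int, dst: int, amount: int) -> None:
    # The context manager wraps both UPDATEs in one transaction:
    # commit on success, rollback if either statement raises.
    with conn:
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, src))
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (amount, dst))

transfer(db, 1, 2, 40)
balances = db.execute(
    "SELECT balance FROM accounts ORDER BY id").fetchall()
print(balances)
```

Once `accounts` lives in a different service than the caller, that `with conn:` block has no equivalent, which is a large part of why prototyping in a monolith is faster.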
How do you lower the risks of adding and removing features if you not only need to add/remove the feature but also decide which service it goes in, set up coordination, and manage networks between the services? That's more work, and more surface area.
Actually for a big codebase there are always reasons to use microservices: much better scalability, easier to develop using small teams, easier to maintain, the software is more robust.
There are even more reasons to use a monolith, or to simply use a service oriented architecture.
Probably not necessary to repeat here, but microservices add a lot of complexity due to asynchronicity, limited interfaces, and more complex error handling.
If you don't know how to design schemas (extensible, modular, ..) and don't know how to design coherent APIs around that, then yes, go Microservices to mitigate the risk. The risk here being the likely chance that you got the schema wrong and end up hacking around it in your "monolith" [almost always sic].
I didn’t know Threads was still a thing. It is still very much barebones and frankly laughable for a product pushed out by a trillion-dollar company. The fact that it took them five months is even more comical considering that all they did was pare down IG.
That's unrelated to the article. If the article was about "how to build SpaceX rockets" the equivalent comment could be "I bought some fertilizer and made my own rocket using PVC pipes".