
I'm in the middle of a production promotion outage right now, reading this while we wait for some caches to be purged to see if that fixes things. Not quite the same, and the consequences here are much smaller than what those guys faced. Still sucks either way...

Fascinating that we're still getting useful science out of almost-50-year-old tech. I think New Horizons is the only other probe that's expected to go interstellar.


I really dislike this take and I see it all the time. Also I'm old and I'm jaded, so it is what it is...

Someone decides X technology is too heavy-weight and wants to just run things simply on their laptop because "I don't need all that cruft". They spend time and resources inventing technology Y to suit their needs. Technology Y gets popular and people add to it so it can scale, because no one runs shit in production off their laptops. Someone else comes along and says, "damn, technology Y is too heavyweight, I don't need all this cruft..."

"There are neither beginnings nor endings to the Wheel of Time. But it was a beginning.”


I'm a fellow geriatric and have seen the wheel turn many times. I would like to point out, though, that most cycles bring something new, or at least make different mistakes, which is a kind of progress.

A lot of the time, stuff gets added to simple systems because it's thought necessary for production, but as our experience grows we realize some of those additions weren't necessary, were in the right direction but not quite right, were leaky abstractions, etc. Then, when the Senior Developers with four years of experience reinvent the simple solution, those additions get stripped out - which is a good thing. When the new simple system inevitably starts to grow in complexity, it won't include the cruft that we now know was bad cruft.

Freeing the new system to discover its own bad cruft, of course. But maybe also some new good additions, which we didn't think of the first time around.

I'm not a Kubernetes expert, or even a novice, so I have no opinions on necessary and unnecessary bits and bobs in the system. But I have to think that container orchestration is a new enough domain that it must have some stuff that seemed like a good idea but wasn't, some stuff that seemed like a good idea and was, and lacks some things that seem like a good idea after 10 years of working with containers.


> I'm not a Kubernetes expert, or even a novice, so I have no opinions on necessary and unnecessary bits and bobs in the system. But I have to think that container orchestration is a new enough domain that it must have some stuff that seemed like a good idea but wasn't, some stuff that seemed like a good idea and was, and lacks some things that seem like a good idea after 10 years of working with containers.

I've grown to learn that the bulk of the criticism directed at Kubernetes doesn't actually reflect problems with Kubernetes. Instead, it shows that the critics themselves are the problem: they mindlessly decided to use Kubernetes for tasks and purposes where it made no sense, got frustrated by the way they misused it, and blamed Kubernetes as the scapegoat.

Think about it for a second. Kubernetes is awesome in the following scenario:

- you have a mix of COTS bare-metal servers and/or vCPUs lying around and you want to use them as infrastructure to run your jobs and services,

- you want to simplify the job of deploying said jobs and services to your heterogeneous ad-hoc cluster, including support for rollbacks and blue-green deployments,

- you don't want developers to worry about details such as DNS and networking and topologies,

- you want to automatically scale your services up and down anywhere in your ad-hoc cluster without having anyone click a button or worry too much if a box dies,

- you don't want to be tied to a specific app framework.

If you take the ad-hoc cluster of COTS hardware out of the mix, odds are Kubernetes is not what you want. It's fine if you still want to use it, but odds are there's a far better fit elsewhere.
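
To make the deployment bullet concrete, here's a minimal sketch of what a team ships in that scenario (names and image are placeholders, not from any real setup):

    # Minimal sketch: one Deployment plus a Service; placeholder names/image.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-api
    spec:
      replicas: 3                 # the scheduler places these anywhere in the cluster
      selector:
        matchLabels:
          app: example-api
      template:
        metadata:
          labels:
            app: example-api
        spec:
          containers:
            - name: api
              image: registry.example.com/api:1.2.3
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example-api           # reachable by DNS name, no topology knowledge needed
    spec:
      selector:
        app: example-api
      ports:
        - port: 80
          targetPort: 8080

If a box dies the scheduler reschedules the pods elsewhere, and `kubectl rollout undo` covers the rollback case.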


> - you don't want developers to worry about details such as DNS and networking and topologies.

Did they need to know this before Kubernetes? I've been in the trade for over 20 years and the typical product developer never cared a bit about it anyway.

> - you don't want to be tied to a specific app framework.

Yes and no. K8s (and Docker images) does help you deploy different languages/frameworks more consistently, but in the end the biggest factor working against this is still organizational rather than purely technical. (This is in an average product company with average developers, not a super-duper SV startup with world-class, top-notch talent where each dev is fluent in at least 4 different languages and stacks.)


> Did they need to know this before Kubernetes?

Yes? How do you plan to configure an instance of an internal service to call another service?

> I've been in the trade for over 20 years and the typical product developer never cared a bit about it anyway.

Do you work with web services? How do you plan to get a service to send requests to, say, a database?

This is a very basic and recurrent use case. I mean, one of the primary selling points of tools such as Docker Compose is how they handle networking. Things like Microsoft's Aspire were developed specifically to mitigate the pain points of this use case. How come you believe this is not an issue?
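
Compose makes the point nicely: the service name doubles as a DNS name on the shared network, and that's exactly the detail a developer has to understand. A minimal sketch (service names and credentials are placeholders):

    # Each service resolves the other by its service name via built-in DNS.
    services:
      api:
        image: registry.example.com/api:latest
        environment:
          # "db" is a DNS name provided by Compose; the dev still has to
          # know that, and which port the database listens on.
          DATABASE_URL: postgres://app:secret@db:5432/app
        networks: [backend]
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: secret
          POSTGRES_DB: app
        networks: [backend]
    networks:
      backend: {}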


You just call some DNS name that is provided by sysadmins/ops. The devs don't know anything about it.


I used to be that sysadmin, writing the config to set all that up. It was far more labor-intensive than today, where as a dev I can write a single manifest and have the cluster take care of everything for me, including things like configuring a load balancer with probes and managing TLS certificates.
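
For example, roughly this much YAML replaces what used to be load-balancer config, health-check setup, and a certificate ticket. A sketch, assuming the cluster runs an ingress controller and cert-manager (hostname and issuer are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-api
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod  # assumes cert-manager is installed
    spec:
      tls:
        - hosts: [api.example.com]
          secretName: example-api-tls   # cert-manager provisions and renews this
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: example-api
                    port:
                      number: 80

(The probes live on the Deployment as readinessProbe/livenessProbe and feed the load balancer's health checks automatically.)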


Nobody is denying that. But GP was saying that with k8s, developers now don't need to know about the network. My rebuttal is that devs never had to know about it. Now maybe even ops people can ignore some of that, because many more things are automated or work out of the box. But the inner complexity of the SDNs inside k8s is, in my opinion, higher than managing the typical star topology + L4 routing + L7 proxies you had to manage yourself back in the day.


> But GP was saying that now with k8s developers don't need to know about the network. My rebuttal is that devs never had to do that.

The only developers who never had to know about the network are those who do not work with networks.


I think a phone call analogy is apt here. Callers don’t have to understand the network. But they do have to understand that there is a network; they need to know to whom to address a call (i.e., what number to dial); and they need to know what to do when the call doesn’t go through.


Devs never had to do that because Moore's Law was still working, the internet was relatively small, and so most never had to run their software on more than one machine outside of some scientific use-cases. Different story now.


Which is why you often had to wait weeks for any change.

Hell, in some places, Ops are pushing k8s partially because it makes DNS and TLS something that can be easily and reliably provided in a minimal amount of time, so you (as a dev) don't have a DNS update request sitting for 5 weeks while Ops are fighting fires the whole time.


> You just call some DNS that is provided by sysadmins/ops.

You are the ops. There are no sysadmins. Do you still live in the 90s? I mean, even Docker Compose supports specifying multiple networks in which to launch your local services. Have you ever worked with web services at all?


> Kubernetes is awesome in the following scenario: [...]

Ironically, that looks a lot like when k8s is managed by a dedicated infra team / cloud provider.

Whereas in most smaller shops that erroneously used k8s, management fell back on the same dev team also trying to ship a product.

Which I guess is reasonable: if you have a powerful enough generic container orchestration system, it's going to have enough configuration complexity to need specialists.

(Hence the first wave of not-k8s was simplified k8s-on-rails for common use cases.)


> Ironically, that looks a lot like when k8s is managed by a dedicated infra team / cloud provider.

There are multiple concerns at play:

- how to stitch together this cluster in a way that it can serve our purposes,

- how to deploy my app in this cluster so that it works and meets the definition of done.

There is some overlap between both groups, but there are indeed concerns that remain firmly in the platform team's wheelhouse. For example, should different development teams have access to other teams' resources? Should some services be allowed to deploy to specific nodes? If a node fails, should a team drop feature work to provision everything all over again? If anyone answers "no" to any of these questions, it is a platform concern.


I suspect it's a learning thing.

Which is a shame, really, because if you want something simple, learning Service, Ingress, and Deployment is really not that hard and pays off for years.

Plenty of PaaS providers will run your cluster for cheap so you don't have to maintain it yourself - OVH, for example.

It really is an imaginary issue with terrible solutions.


> they mindlessly decided to use Kubernetes for tasks and purposes that made no sense...

Or... were instructed that they had to use it, regardless of the appropriateness of it, because a company was 'standardizing' all their infrastructure on k8s.


> it won't include the cruft that we now know was bad cruft.

There's no such thing as "bad cruft" - all cruft is features you don't use that are (or were) in all likelihood critical to someone else's workflow. Projects transform from minimal and lightning fast to bloated one well-reasoned PR at a time; someone tries a popular project and figures "this would be perfect, if only it had feature X or supported scenario Y", multiplied by a few thousand PRs.


I hope this isn't the case here with Rivet. I genuinely believe that Kubernetes does a good job for what's on the tin (i.e. container orchestration at scale), but there's an evolution that needs to happen.

If you'll entertain my argument for a second:

The job of someone designing systems like this is to decide what are the correct primitives and invest in building a simple + flexible platform around those.

The original cloud primitives were VMs, block devices, LBs, and VPCs.

Kubernetes became popular because it standardized primitives (pods, PVCs, services, RBAC) that containerized applications needed.

Rivet's taking a different approach, investing in three different primitives based on how most organizations deploy their applications today:

- Stateless Functions (a la Fluid Compute)

- Stateful Workers (a la Cloudflare Durable Objects)

- Containers (a la Fly.io)

I fully expect to raise a few hackles by claiming these are the "new primitives" for modern applications, but our experience shows they solve real problems for real applications today.
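
To make the three primitives concrete, here's a rough TypeScript sketch. The shapes are illustrative only - not an actual API, just the contract each primitive implies:

    // Illustrative shapes only; assumes a runtime with fetch-style
    // Request/Response globals.

    // 1. Stateless function: no affinity; any instance serves any request,
    //    scaling to zero between calls.
    type StatelessFn = (req: Request) => Promise<Response>;

    // 2. Stateful worker: a key-addressed unit that owns its state;
    //    every request for a given key routes to the same instance.
    interface StatefulWorker<S> {
      key: string;
      state: S;
      handle(req: Request): Promise<Response>;
    }

    // 3. Container: a plain OCI image for anything needing a full
    //    userland, raw sockets, or non-HTTP protocols.
    interface ContainerSpec {
      image: string;
      ports: { port: number; protocol: "tcp" | "udp" }[];
    }

    // Example: a counter only makes sense as a stateful worker, because
    // two stateless instances would race on the same count.
    const counter: StatefulWorker<{ count: number }> = {
      key: "counter:42",
      state: { count: 0 },
      async handle(_req) {
        this.state.count += 1;
        return new Response(String(this.state.count));
      },
    };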

Edit: Clarified "original cloud primitives"


> Rivet's taking a different approach, investing in three different primitives based on how most organizations deploy their applications today:

I think your take only reflects buzzword-driven development, and makes no sense beyond that point. A "stateless function" is at best a constrained service which supports a single event handler. What does that buy you over plain old vanilla Kubernetes Deployments? Nothing.

To make matters worse, it doesn't seem that your concept was thought all the way through. I mean, two of your concepts (stateless functions and stateful workers) have no relationship with containers. Cloudflare has spent years telling everyone who will listen that they based their whole operation on tweaking the V8 engine to let multiple tenants run their code in as many V8 isolates as they want and need. Why do you think you need containers to run a handler? Why do you think you need a full-blown cluster orchestrating containers just to run a function? Does that make sense to you? It sounds like you're desperate to shoehorn the buzzword "Kubernetes" next to "serverless" in a way that serves absolutely no purpose beyond riding a buzzword.


I don't disagree with the overall point you're trying to make. However, I'm very familiar with this type of project (seeing as I implemented a similar one at work 5 years ago), so I can answer some of your questions about how one arrives at such an architecture.

> Why do you think you need containers to run a handler?

You don't, but plenty of people don't care and ask for this shit. This is probably another way of saying "buzzword-driven" as people ask for "buzzwords". I've heard plenty of people say things like

       We're looking for a container native platform
       We're not using containers yet though.
       We were hoping we can start now, and slowly containerize as we go
or

       I want the option to use containers, but there is no business value in containers for me today. So I would rather have my team focus on the code now, and do containers later

These are actual real positions by actual real CTOs commanding millions of dollars in potential contracts if you just say "ummm, sure.. I guess I'll write a Dockerfile template for you??"

> Why do you think you need a full blown cluster orchestrating containers just to run a function?

To scale. You need to solve the multi-machine story. Your system can't be a single-node system. So how do you solve that? You either roll up your sleeves and go learn how Kafka or Postgres does it for their clusters, or you let Kubernetes do most of that hard work and deploy your "handlers" on it.

> Does that make sense to you?

Well... I don't know. These types of systems (of which I have built 2) are extremely wasteful and bullshit by design. A design that there will never be a shortage of demand for.

It's a really strange pattern, too. It has so many gotchas around cost, waste, efficiency, performance, code organization, etc. Whenever you look, whoever built one of these things either has a very limited system in terms of functionality, or they have slowly reimplemented what a "Dockerfile" is, but "simpler", you know. It's "simple" because they know the ins and outs of it.


> To scale. You need to solve the multi-machine story. Your system can't be a single node system.

Why can't it be? How many customers do you have that you can't deploy a bunch of identical workers over a beefy database?

Companies spend so much time on this premature optimization, that they forget to actually write some features.


> You don't, but plenty of people don't care and ask for this shit. This is probably another way of saying "buzzword-driven" as people ask for "buzzwords".

That's a fundamental problem with the approach OP is trying to sell. It's not solving any problem. It tries to sell a concept that is disconnected from real-world technologies and practices, requires layers of tech that solve no problem and serve no purpose, and doesn't even simplify anything at all.

I recommend OP put aside 5 minutes to go through Cloudflare's docs on Cloudflare Workers, which they released around a decade ago, and get up to speed on what it actually takes to put together stateless functions and durable objects. Dragging Kubernetes into the problem makes absolutely no sense.


Where did Nathan say he's using Kubernetes? I think I missed something. His comment describes a new alternative to Kubernetes. He's presenting stateless functions and stateful actors as supplementing containers. He knows all about Cloudflare Workers -- Rivet is explicitly marketed as an alternative to it.

It feels like you didn't really read his comment yet are responding with an awful lot of hostility.


I currently need a container if I need to handle literally anything besides HTTP


> I currently need a container if I need to handle literally anything besides HTTP

You don't. A container only handles concerns such as deployment and configuration. Containers don't speak HTTP either: they open ports and route traffic at an OSI layer below HTTP.


Yes! All I was trying to say:

Containers can run code that opens arbitrary ports using the provided kernel interface, whereas serverless workers cannot. Workers can only handle HTTP using the provided HTTP interface.

I don’t need a container, sure, I need a system with a network sockets API.
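
A quick sketch of the difference, assuming Node.js inside the container (two separate programs, shown together for contrast):

    // Program 1 - in a container: the kernel socket API is available,
    // so any byte-level protocol works (SSH, SMTP, game servers, ...).
    import net from "node:net";

    const server = net.createServer((socket) => {
      socket.write("220 definitely-not-http\r\n");
      socket.end();
    });
    server.listen(2525);

    // Program 2 - in a serverless worker: the runtime only hands you
    // HTTP events through its provided interface.
    export default {
      async fetch(request: Request): Promise<Response> {
        return new Response(`only HTTP reaches me: ${request.url}`);
      },
    };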


FWIW, Lambda takes the opposite of your assertion: there are function entrypoints and the HTTP or gRPC or Stdin is an implementation detail; one can see that in practice via the golang lambda "bootstrap" shim <https://pkg.go.dev/github.com/aws/aws-lambda-go@v1.49.0/lamb...> which is invoked by the Runtime Interface Emulator <https://github.com/aws/aws-lambda-runtime-interface-emulator...>

I don't have the links to Azure's or GCP's function emulation framework, but my recollection is that they behave similarly, for similar reasons
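
In TypeScript terms, the same idea looks roughly like this (a sketch assuming the @types/aws-lambda typings, not the Go shim itself): the unit you author is just a function, and the transport is the runtime's problem.

    // Sketch: a Lambda entrypoint is a plain function; whether the event
    // arrived over HTTP, gRPC, or stdin is the runtime shim's concern.
    import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

    export const handler = async (
      event: APIGatewayProxyEvent
    ): Promise<APIGatewayProxyResult> => {
      return { statusCode: 200, body: `hello from ${event.path}` };
    };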


Oh yes! I was thinking about the V8 isolate flavor of stateless functions (Cloudflare, Fastly, etc). I had forgotten about the containerized Linux microVM stateless functions (Lambda, Cloud Run, etc). They have everything, and my favorite use is https://github.com/stanfordsnr/gg

Funnily enough, the V8 isolates support stdio via WASM now.


> Workers can only handle HTTP using the provided HTTP interface.

Not true. Cloudflare Workers support Cron triggers and RPC calls in the form of service bindings. Also, Cloudflare Queues support consumer workers.

Claiming "Workers can only handle HTTP" is also meaningless as HTTP is also used to handle events. For example, Cloudflare Queues supports consumers using HTTP short polling.


You forgot email, too!

But I still can't handle SSH or proxy WireGuard or anything like that (yet!)


I think it’s great that Kubernetes standardized primitives, and IMO its best “feature” is its declarative nature and how easy it makes figuring out what other devs did for an app without digging through documentation. It’s the easiest thing to go through a cluster and “reverse engineer” what’s going on. One of the legacy apps I’m migrating to Kubernetes right now has like 20 different deployment scripts that all do different things to get a bunch of Drupal multisites up and running correctly, whereas the Kubernetes equivalent is a simple deployment Helm chart where the most complicated component is the Dockerfile.

How does Rivet handle this? If I give 100 devs the task of deploying an app there, do they get kind of fenced into a style of development that’s then simple to “reverse engineer” by someone familiar with the platform?


It’s also possible for things to just be too complex.

Just because something’s complex doesn’t necessarily mean it has to be that complex.


IMHO, the rest of that sentence is "be too complex for some metric within some audience"

I can assure you that trying to reproduce kubernetes with a shitload of shell scripts, autoscaling groups, cloudwatch metrics, and hopes-and-prayers is too complex for my metric within the audience of people who know Kubernetes


Or too generic. A lot of the complexity is from trying to support all use cases. For each new feature there is a clear case of "we have X happy users, and Y people who would start using it if we just added Z". But repeat that often enough and the whole thing becomes so complex and abstract that you lose those happy users.

The tools I've most enjoyed (including deployment tools) are those with a clear target group and vision, along with leadership that rejects anything that falls too far outside of it. Yes, such a tool usually doesn't have all the features I want, but it also doesn't have a myriad of features I don't need.


> It’s also possible for things to just be too complex.

I don't think so. The original problem that the likes of Kubernetes solve is still the same: set up a heterogeneous cluster of COTS hardware and random cloud VMs to run and automatically manage the deployment of services.

The problem, if there is any, is that some people adopt Kubernetes for something Kubernetes was not designed to do. For example, do you need to deploy and run a service in multiple regions? That's not the problem that Kubernetes solves. Do you want to autoscale your services? Kubernetes might support that, but there are far easier ways to do that.

So people start to complain about Kubernetes because they end up having to use it for simple applications such as running a single service in a single region from a single cloud provider. The problem is not Kubernetes, but the decision to use Kubernetes for an application where running a single app service would do the same job.


Because of promo-driven, resume-driven culture, engineers are constantly creating complexity. No one EVER got a promotion for creating LESS.


I've also been through the wheel of complexity a few times and I think the problem is different: coming up with the right abstraction is hard and generations of people repeatedly make the same mistakes even though a good abstraction is possible.

Part of it comes from new generations not understanding the old technology well enough.

Part of it comes from the need to remake some of the most basic assumptions, but nobody has the guts to redo POSIX or change the abstractions available in libc. Everything these days is a layer or three of abstractions on top of Unix primitives, each coming up with its own set of primitives.


Just write the stuff you need for the situation you’re in.

This stupid need we have to create general purpose platforms is going to be the end of progress in this industry.

Just write what you need for the situation you’re in. Don’t use kubernetes and helm, use your own small thing that was written specifically to solve the problem you have; not a future problem you might not have, and not someone else’s problem. The problem that you have right now.

It takes much less code than you think it will, and after you’ve done it a few times, all other solutions look like enormous Rube Goldberg machines (because that’s what they are, really).

It is 1/100th the complexity to write your own little thing and maintain it, compared to running things in Kubernetes and maintaining that monster.

I’m not talking about writing monoliths again. I’m talking about writing only the tiny little bits of kubernetes that you really need to do what you need done, then deploying to that.


> I’m not talking about writing monoliths again. I’m talking about writing only the tiny little bits of kubernetes that you really need to do what you need done, then deploying to that.

Don't limit yourself like that. A journey of a thousand miles begins with a single step. You will have your monolith in no time.

Re-implement the little bits of Kubernetes you need here and there. A script here, an env var there, a cron job or a daemon to handle things. You'll have your very own marvelous creation in no time. Which is usually the perfect time to jump to a different company, or to replace your 1.5-year-old "legacy system". Best thing about it? No one really understands it but you, which is really all that matters.


You and I do things differently I guess.

The things like this that I write stay small. It is when others take them over from me that those things immediately bloat and people start extending them with crap they don’t need because they are so used to it that they don’t see it as a problem.

I am allergic to unnecessary complexity and I don’t think anyone else that I have ever worked with is. They seem drawn to it.


See also: JavaScript frameworks


Rivet may be the best of both worlds, because it's not only another complex project built to simplify the complexity of doing complex things, but you also get to have to write all your management in TypeScript.


If $1.2 billion in valuation was destroyed in 49 days because the CTO wasn't there, that says something about the CTO's inability to delegate and to build a team that supports their decisions and vision and can carry on without them. "When you do things right, people won't be sure you've done anything at all."


The device was always doomed. They launched a direct competitor to the iPad with maybe 10% of the functionality. This article is just hubris on the CTO's part ("if only I had been around for the launch instead of my incompetent team, everything would have worked out").


One other thing to point out is that the entire tablet market only exists today due to re-use of the phone ecosystems. Just look at any popular app on a tablet - they all have massive borders/sidebars and within those it's just the phone app as-is. Not even Facebook makes a dedicated tablet app. It's all just the phone app ported across in a very crude way. The simple fact is that the tablet market isn't big enough to be independent of the phone ecosystem.

The CTO here proudly says he convinced the board to buy Palm and get into the tablet market, but even thinking about this lightly, I'm not sure it was wrong for the CEO (and subsequently the CTO) to be kicked out over this move. It's weird there's no reflection on that hubris. A tablet market without reuse of a larger market's app ecosystem seems like poor strategic thinking to me.


> Just look at any popular app on a tablet - they all have massive borders/sidebars and within those it's just the phone app as-is.

What apps are you using? That's not the case for any of the iPad apps I use anymore, though early on it was fairly common since quick ports could be made by checking the "release for iPad" box or however it worked back then. That was 15 years ago, though, things have changed quite a bit since then.


Android has unfortunately trended the opposite direction - apps that once had tablet UIs dropped them in favour of big phone UIs as they did redesigns around 2016-2020. I dropped out of the Android tablet world in 2019 for a Windows tablet, and most recently went for an iPad this year, so maybe Android has recovered ground there, but judging by how few Android tablets are actually on the market, I wouldn't be hopeful.


There is a difference between iPad and iPhone apps. The former run full screen, and the latter letterboxed.

I don’t use iPhone apps on this iPad Mini, they are too painful. I use the Instagram and Bluesky web sites instead.


I don't use either of those services so I was unaware. So I guess there are still some apps out there without a proper iPad interface. I haven't encountered any in at least a decade though and they seem to be in the minority. Apple has gone to great lengths to make it easy to at least make something that fits on the iPad even if you don't try to make it properly native and use the screen real estate effectively. So that strikes me as laziness on the part of the Instagram and Bluesky app developers to not even try.


One could make the case that nobody needs tablet apps because web apps work well on tablets -- without annoying notifications or annoying popups to access privacy violating features [1], without adding clutter to an already too cluttered "desktop" of icons that all look the same, etc.

[1] that nobody in their right mind would click on, but I guess somebody with dementia might...


I don't totally disagree, though I dislike most web apps because, well, they require an internet connection too often (if not always). And I don't trust their creators to be any better at not violating privacy (my data is typically stored on their servers, after all).

With that said, I'm not sure what you're replying to in my comment.


Your overall point might be correct, but some of your specifics are incorrect:

>Just look at any popular app on a tablet - they all have massive borders/sidebars and within those it's just the phone app as-is.

None of the apps I am using on my iPad have borders/sidebars.

Gmail and Youtube have long had dedicated iPad apps. DeepSeek has one (a well designed and implemented one) for interacting with its chat service. The last time I checked, Google Gemini had only an iPhone app, but I checked again today and found a full-fledged iPad app.

Even my credit union, which operates only in California and does not have any physical branches in Southern California, has a full-fledged iPad app.


HP also shipped two Palm phones with webOS, the Veer and the Pre 3. They would have been more than able to create a complete mobile (and consumer electronics!) ecosystem.


Had the iPad not launched immediately opposite it, I can envision a world where HP goes through two or three revisions and ends up with a solid device with its own "personality", much like Microsoft's "Surface" line of glued-together tablets and "laptops", which sort of compete with the iPad and MacBook Air even if they hardly market them. The fact that Microsoft eventually succeeded in the space seems to indicate HP could have as well. I can also see the business case where the new CEO isn't interested in rubber-stamping a new product line that's going to lose money every quarter for the next three years against the glowing sun that is the iPad. There are better ways to burn political capital as a C-level.


The thing with Windows tablets and Android tablets is in both cases the software development only has to justify its net increase in spend over just doing phone apps, but since HP didn't have a good market of phone apps to begin with, they'd basically need to justify the entire software development cost, on lower sales.


Taking the story at face value, the issue isn't necessarily delegation. If the C-suite is making a decision and one of their primary people (CTO in this case) is absent, it almost doesn't matter who he delegates to. The delegated individual is not their peer, so whatever they say will be discounted. I've been in that situation (as the delegated individual) several times. It's frustrating. Even if they respect you, you don't get a vote in the final decision.


In a company like HP at that moment...

* Might the same decisions have been made, even if the CTO were there?

* Would the CTO have had one or more (SVP? VP?) people ramped up on the technical/product side, able to take a temporary acting-CTO role on that?

* Would there have been any sharp-elbow environment reason not to elevate subordinates temporarily into one's role and access? (For example, because you might return to find it's permanent.)

* What was the influence and involvement of the other execs? Surely it wasn't just CTO saying "buy this", CEO saying "OK", and then a product and marketing apparatus executing indifferently?


That'd imply a bus factor of 1 in most analyses, right?


This is my reminder to add to my family's emergency stockpile for the month. Probably should do some rotating of my stock too.


Isn’t the move to never rotate stock? You’d probably be happy with that can of beans that expired 10 years ago if you were reduced to just that in your resource-war scenario.


I've never been a fan of JRPGs, preferring things like BG3, Mass Effect, Dragon Age (not the new one), and Skyrim. But after 800 hours in BG3, I think it's time to try something else and this looks like it might be that.


So what's the "Speed Queen" equivalent for dishwashers and other home appliances?


I still love my Jenkins Pipelines, Groovy shared libraries, and Bash scripts. It may be old, the UI may be a little crusty, but it's well understood, it just works, and I don't think much about the tool anymore.


I know several pointy haired bosses in real enterprise IT shops who would jump on this. Because everything is run on Excel/Google spreadsheets.


I loved playing No One Lives Forever 1&2 on my Voodoo 5 5500. That was the height of my PC building days. Now as a wizened old man, I'm stuck with these Apple Macbook Pros/Airs and they do well enough. But I do miss building my own machines...


This was posted on HN the other day. Enjoy!

http://nolfrevival.tk/


FWIW, you can build a fully functional desktop for ~$400 with integrated graphics (that can play most modern games on lower settings), or maybe $600 with a discrete GPU. Less if you go with used parts.


How wizened? If you're close to retiring, maybe you can build a PC and play some games. Keep the brain running, and stay in touch with friends (if they’ll do multiplayer).

