
Depends. Mobile tests that run on emulators are the worst. Unit tests finish relatively fast, but integration tests that bring up servers tend to be slow (30-40 minutes, best case, for the projects I'm working on). The cost of this gets amortized in stages: you can run the unit tests immediately on the command line as the fastest signal. Then, when you send the change for code review, presubmit runs. During code review you may choose to run them again as you go. Eventually, when you submit, they run once more.

If there haven't been any changes to your commit/CL and there is already a passing presubmit run for it, it will just skip the tests and submit.
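Roughly, that skip-if-unchanged behavior amounts to something like this - a minimal Python sketch around a hypothetical CI wrapper; none of these names are a real system's API:

    def submit(revision: str, last_green_revision: str | None, suites) -> None:
        # If this exact revision already has a passing presubmit, skip straight to submit.
        if revision == last_green_revision:
            print("presubmit already green for this revision; skipping tests and submitting")
            return
        for name, run in suites:  # fastest signal first: unit suite, then integration suite
            print(f"running {name} ...")
            run()
        print("all suites passed; submitting")

    # Usage: the unit suite is the quick signal, the integration suite is the 30-40 minute part.
    submit("abc123", None, [("unit", lambda: None), ("integration", lambda: None)])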




What can you possibly be doing in an integration test that takes 40 minutes?


Integration tests bring up servers, and once you have tens or hundreds, this happens.


Oh wow. We have a test tenancy that's carried throughout production, so you make requests against real backends (read/write data in the test namespace, sometimes read-only production data). There's a proxy in front doing rate limiting, endpoint whitelisting, audit logging, emergency lockdown, etc. I never thought of deploying a whole separate environment just for integration testing.

Still, seems you could keep a handful of integration test environments always running? Time spent waiting your turn for one of them could well be less than time spent spinning up a whole bunch of servers.
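For what it's worth, the test-tenancy proxy described above boils down to checks like these - a hedged Python sketch in which every name and number is invented for illustration:

    from dataclasses import dataclass

    ALLOWED_ENDPOINTS = {"/v1/orders", "/v1/users"}   # endpoint whitelisting
    RATE_LIMIT_PER_MIN = 600                          # rate limiting
    LOCKDOWN = False                                  # emergency lockdown switch

    @dataclass
    class Request:
        path: str
        tenant: str  # "test" traffic is kept in its own namespace

    def route(req: Request, requests_last_minute: int) -> str:
        if LOCKDOWN:
            return "denied: test traffic locked down"
        if req.path not in ALLOWED_ENDPOINTS:
            return "denied: endpoint not whitelisted for the test tenancy"
        if requests_last_minute > RATE_LIMIT_PER_MIN:
            return "denied: test tenant rate limit exceeded"
        print(f"audit: {req.tenant} -> {req.path}")   # audit logging
        # Writes land in the test namespace; some reads may hit read-only prod data.
        return f"forwarded to namespace={req.tenant!r}"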


There is an effort to make everything hermetic. Namespacing is hard, and not always even possible, and touching production servers (and potentially crashing them) could cause significant revenue damage.

I don't think all tests should be hermetic - the benefit usually doesn't outweigh the effort it takes to make that happen, but hey - that's what we are doing.


In a single integration test? That'd be pretty absurd.

At least in our project, each integration test has a certain amount of overhead. Some backends are fakes (when I request X, you provide Y), some are actually booted up with the test, e.g. persistence.

Multiply this across N integration tests, have lots of demand for the same CPUs, and you're up to 30-40 minutes of integration test time.

Though, that said, some integration tests can be crazy long if they have a lot of "waitFor" style conditions. "Do this, then wait for something to happen in backend Z. Once that's done, do this, and this, and this..."
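To make the overhead concrete, here's a minimal pytest-flavored sketch of both kinds of backend plus a "waitFor"-style poll; FakeBackend, BootedBackend and the timings are all made up for illustration:

    import time
    import pytest

    class FakeBackend:
        """'When I request X, you provide Y' - an in-process fake, no boot cost."""
        def __init__(self, canned):
            self.canned = canned
        def get(self, key):
            return self.canned[key]

    class BootedBackend:
        """Stand-in for a backend really started with the test (e.g. persistence);
        here the boot is just a sleep representing the per-test overhead."""
        def __init__(self, boot_seconds=0.1):
            time.sleep(boot_seconds)
            self.store = {}
        def write(self, key, value):
            self.store[key] = value
        def read(self, key):
            return self.store.get(key)
        def shutdown(self):
            self.store.clear()

    def wait_for(condition, timeout_s=30, poll_s=0.1):
        """Poll until condition() is truthy - chains of these are where tests get long."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if condition():
                return
            time.sleep(poll_s)
        raise TimeoutError("condition never became true")

    @pytest.fixture
    def persistence():
        backend = BootedBackend()  # multiplied across N tests, this is the 30-40 minutes
        yield backend
        backend.shutdown()

    def test_write_then_read(persistence):
        persistence.write("key", "value")
        wait_for(lambda: persistence.read("key") == "value")

    def test_fake_lookup():
        assert FakeBackend({"X": "Y"}).get("X") == "Y"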


>Multiply this across N integration tests,

But in theory with enough servers all the integration tests could be run in parallel. So it would only take as long as the longest single test.


The longest single test + the time it takes to set up the test environment (booting servers, etc).

Parallelizing tests has diminishing returns unless you manage to dramatically reduce the setup time.
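Back-of-the-envelope, with made-up numbers, on why the setup ends up dominating:

    setup_minutes = 10                     # booting servers for one environment
    test_minutes = [2, 3, 5, 8, 1, 12, 4]  # individual integration tests

    serial = setup_minutes + sum(test_minutes)          # one environment, tests run one after another
    fully_parallel = setup_minutes + max(test_minutes)  # one environment per test, all at once

    print(f"serial: {serial} min, fully parallel: {fully_parallel} min")
    # serial: 45 min, fully parallel: 22 min - no amount of extra parallelism
    # gets you under the setup time, hence the diminishing returns.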



