Hacker News

Maybe using a ticketing system [or just call it a project management system] is the right abstraction level.

If A and B have nothing to do with each other - other than that, for some circumstantial reason, they consume data from each other - then why would we care if A or B starts to support a new output format?

If we want to do a format change for some reason - maybe it allows better security/traceability - then sure, make a project and track the tasks (make A able to produce/consume the new format, make B able to produce/consume the new format, deploy A2 and B2 to a test environment, promote to prod), but I don't see why you would track that at the source code versioning level.

A and B have separate tests to ascertain that they can deal with the new format, and then you do integration testing, which might catch problems that should then be covered by unit tests in A or B. (Or in a fuzzer for said format.)
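To make that concrete, here is a minimal sketch of what those separate tests might look like, assuming a hypothetical JSON wire format with a "version" field. The names (`produce_v2`, `consume`, the checksum field) are illustrative, not anything from an actual A or B.

```python
import json

# Hypothetical shared wire format: a JSON record with a "version" field.
# v2 (the "new format") adds a checksum; v1 lacks it.

def produce_v2(payload: dict) -> str:
    """A's new producer: emits a v2 record with a crude checksum."""
    body = json.dumps(payload, sort_keys=True)
    record = {"version": 2, "payload": payload,
              "checksum": sum(map(ord, body))}
    return json.dumps(record)

def consume(raw: str) -> dict:
    """B's consumer: accepts both v1 and v2 records."""
    record = json.loads(raw)
    if record["version"] not in (1, 2):
        raise ValueError(f"unsupported format version {record['version']}")
    return record["payload"]

# Unit tests that live in B's own repo, covering both versions:
assert consume(json.dumps({"version": 1, "payload": {"x": 1}})) == {"x": 1}
assert consume(produce_v2({"x": 1})) == {"x": 1}
```

Each project tests its own side against both format versions independently; the integration test then only has to wire a real A output into a real B input.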




> Maybe using a ticketing system [or just call it a project management system] is the right abstraction level.

> If A and B have nothing to do with each other - other than that, for some circumstantial reason, they consume data from each other - then why would we care if A or B starts to support a new output format?

First, even if the coordination between A and B is recorded in the ticketing system, the coordination between F and G is probably not.

> I don't see why you would track that at the source code versioning level.

Pretend F and G are tables in a database (or other data storage system) if that makes it easier.

Where is the schema stored? Who records the migration path?

Many people like to record migrations in a version control system, but it is tricky to link those migrations to the (otherwise) independent A and B.
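A minimal sketch of that version-control-recorded migration approach, assuming a sqlite database and hypothetical table/column names: numbered migration steps live in one repo, and a small runner applies whatever steps the database hasn't seen yet.

```python
import sqlite3

# Numbered, append-only migration steps, checked into version control.
# Table and column names are hypothetical.
MIGRATIONS = [
    (1, "CREATE TABLE f (id INTEGER PRIMARY KEY, data TEXT)"),
    (2, "ALTER TABLE f ADD COLUMN fmt_version INTEGER DEFAULT 1"),
]

def migrate(conn: sqlite3.Connection) -> int:
    """Apply any migrations the database hasn't seen; return final version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for v, sql in MIGRATIONS:
        if v > current:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version (v) VALUES (?)", (v,))
            current = v
    conn.commit()
    return current

migrate(sqlite3.connect(":memory:"))  # returns 2 on a fresh database
```

The open question from above remains: this repo records the schema's history, but nothing in it ties migration 2 to the releases of A and B that depend on `fmt_version` existing.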

If these are file formats, where does the code that consumes and produces them live? Or network formats? The problem remains the same -- do we break this up into additional libraries?

There's a very real ordering between the releases of A and B that isn't properly encoded; we're relying on process diligence (as opposed to tooling) to be correct.
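One way to push that ordering out of process diligence and into tooling is a deploy-time compatibility gate: each side declares the format versions it speaks, and the check refuses incompatible pairs. This is a sketch; the version sets and function name are invented for illustration.

```python
# Declared capabilities of the releases being deployed (illustrative values).
A_PRODUCES = {1, 2}   # format versions A's release can emit
B_CONSUMES = {1}      # format versions B's release can parse

def deploy_check(produces: set, consumes: set) -> None:
    """Block the deploy if the producer can emit something the consumer can't read."""
    unreadable = produces - consumes
    if unreadable:
        raise RuntimeError(
            f"deploy blocked: consumer cannot read format versions {sorted(unreadable)}")

# deploy_check(A_PRODUCES, B_CONSUMES)  # raises until B2, which adds v2, ships first
```

The release ordering (B2 before A2) then fails loudly in CI or the deploy pipeline instead of silently in production.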


If tables, then if they are in the same DB, they should be in the same project.

If they are independent tables, then I don't care, show me the API between the projects.

If these are file/network/serialization/wire/in-memory/binary/codec formats, then there are conformance checkers (passive and active, like fuzzers). Those are separate projects, but they can be used like tools during testing and development.
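A toy sketch of both kinds of checker for a hypothetical JSON-line format: a passive conformance predicate, and a crude active fuzzer that feeds garbage to a parser and insists on clean rejection. A real project would reach for a tool like Hypothesis or AFL; this just shows the shape of the idea.

```python
import json
import random

def conforms(raw: str) -> bool:
    """Passive checker: does this line satisfy the (hypothetical) format rules?"""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(record, dict) and "version" in record

def parse(raw: str) -> int:
    """A parser under test: returns the record's format version or rejects."""
    record = json.loads(raw)
    if not isinstance(record, dict) or "version" not in record:
        raise ValueError("not a format record")
    return record["version"]

def fuzz(parser, trials: int = 1000, seed: int = 0) -> None:
    """Active checker: random junk must be rejected cleanly, never crash the parser."""
    rng = random.Random(seed)
    for _ in range(trials):
        junk = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parser(junk.decode("utf-8", errors="replace"))
        except ValueError:
            pass  # clean rejection is fine; any other exception is a parser bug

fuzz(parse)
```

Both checkers live outside A and B, but either project can run them in its own test suite.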

Rely on tooling to make sure that the stated goal of the project is reached. (It now supports F or G or X,Y,Z formats. It supports output-format G by processing input-format F. If that's a project requirement, test it in that project.)

You can use a top-level repo for the integration tests. But there's no need to make it one flat repo.


> If tables, then if they are in the same DB, they should be in the same project.

Lock-stepping two otherwise unrelated applications because they both share support for a data structure is silly at best, and often impractical, especially if development for only one of the projects is "in-house". Consider the possibility that "A" is a commercial product produced by another company.

Anyway, in my experience most software upgrades don't involve a schema change, so it's worth optimising for the common case while still supporting the difficult one.



