
I'm curious as to what exactly you mean. Can you give some examples? If you're frequently making large-scale changes, I'd spend more time worrying about why you're having such a hard time nailing the requirements down.



If you've only worked on projects with nailed-down requirements, you're probably not working on the sorts of projects most HN people face. The requirements change because the world changes, or because our understanding of it does. That's the nature of a startup. Stable codebases serving stable needs don't need as much refactoring, that's true, and in those cases unit tests might be a waste of time. But for those of us (the majority, I'd wager, at least around here) who work on fast-moving, highly speculative projects, they are an absolute godsend.


A framework previously designed to work on a single device was ripped apart, and several key elements were made to run remotely over a network instead. (That may sound trivial in a sentence, but if anyone ever asks you to do this, you should be very concerned.) The framework was never designed to do this (in fact, I'm being generous in even dignifying it with the term "framework"), and tight coupling and global variables were used throughout. This was not a multi-10-million-line behemoth, but it was the result of at least a good man-century of work.

As mentioned in my other post, first I had to bring it under test as-is, then de-globalize a lot of things, then run the various bits across the network, all while testing the networked application itself. Also, by the way, releases were still being made: many (though not all) of the intermediate stages needed to still be functional as single devices, and we wanted the system to be as backward-compatible as possible across versions now spanning over a year of releases. (You do not want to be manually testing that your network server is still compatible with ~15 previous versions of the client.) And there are still many other cases I'm not even going into here where testing was critical.
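(To make that concrete, here is a minimal, entirely hypothetical sketch of one such de-globalization step. None of these names are from the real project; the point is the seam: the global becomes an optional argument, so old call sites keep working unchanged while tests inject a fake.)

    # Hypothetical sketch -- DEVICE_REGISTRY, FakeDevice, etc. are
    # invented for illustration, not taken from the actual project.

    DEVICE_REGISTRY = {}  # the legacy module-level global

    def send_command(device_id, command, registry=None):
        # The seam: existing call sites pass nothing and get the global,
        # exactly as before; tests inject their own registry.
        registry = DEVICE_REGISTRY if registry is None else registry
        return registry[device_id].execute(command)

    class FakeDevice:
        def __init__(self):
            self.log = []

        def execute(self, command):
            self.log.append(command)
            return "ok"

    def test_send_command_routes_to_device():
        fake = FakeDevice()
        assert send_command("dev1", "reset", registry={"dev1": fake}) == "ok"
        assert fake.log == ["reset"]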

The task I'm currently working on is taking a configuration API that has ~20,000 existing references to it and is currently effectively in "immediate mode" (changes occur instantly), and turning it into something that can be managed transactionally (along with a set of other features) without having to individually audit each of those 20,000 references. Again, I had to start by putting the original code under a microscope and testing it (including bug-for-bug compatibility), then incrementally working out the features I needed and testing them as I go. The new code needs to be as behavior-similar as possible, because history has shown that small deviations cause a fine spray of subtle bugs that are really difficult to catch in QA.
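(One plausible shape for that kind of change, again with invented names and details: keep the old setter's signature, but route writes through an optional transaction that defaults to the legacy immediate-mode behavior, so none of the existing call sites need to change.)

    # Hypothetical sketch: an immediate-mode config API gains a
    # transactional mode. All names are illustrative, not the real API.

    _config = {}     # the live configuration
    _pending = None  # staged writes while a transaction is open

    def set_option(key, value):
        # The existing entry point the ~20,000 references call.
        if _pending is not None:
            _pending[key] = value  # staged: nothing visible yet
        else:
            _config[key] = value   # legacy immediate-mode behavior

    class transaction:
        # New opt-in feature; old call sites never see it.
        def __enter__(self):
            global _pending
            _pending = {}
            return self

        def __exit__(self, exc_type, exc, tb):
            global _pending
            staged, _pending = _pending, None
            if exc_type is None:
                _config.update(staged)  # commit atomically
            return False                # on error, staged writes are dropped

    set_option("timeout", 30)            # old callers behave as before
    assert _config["timeout"] == 30

    with transaction():                  # new code can batch and commit
        set_option("timeout", 60)
        assert _config["timeout"] == 30  # not visible until commit
    assert _config["timeout"] == 60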

I could not do this without automated testing. (Perhaps somebody else who is way smarter could, but I have my doubts.) The tests have already caught so many things. Also, my first approach turned out to be wrong, so I had to take another, but I was fortunately able to carry the tests over, because the tests were testing behavior and not implementation. (It was also the act of writing those tests that revealed the performance issues before the code shipped.)
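(To illustrate what "testing behavior, not implementation" buys you, here is a toy, invented example: the test knows nothing about how the cache stores its entries, so it survives a wholesale rewrite of the internals.)

    # Invented example: the test asserts only observable behavior, so
    # swapping the cache's internals (dict, LRU list, ...) won't break it.

    class Cache:
        def __init__(self, capacity):
            self._capacity = capacity
            self._data = {}  # implementation detail; free to change

        def put(self, key, value):
            if len(self._data) >= self._capacity and key not in self._data:
                self._data.pop(next(iter(self._data)))  # evict oldest entry
            self._data[key] = value

        def get(self, key):
            return self._data.get(key)

    def test_cache_behavior():
        cache = Cache(capacity=2)
        cache.put("a", 1)
        cache.put("b", 2)
        assert cache.get("a") == 1  # what callers can observe
        cache.put("c", 3)           # over capacity: something was evicted
        assert cache.get("c") == 3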

This isn't a matter of large-scale requirement changes on a given project. This is a matter of wanting to take an existing code base and add new features that nobody thought of when the foundation of the code was being laid down 5-7 years ago. (In fact, had they tried to put this stuff in at the time, it would have all turned out to be a YAGNI violation and would have been wrong anyhow.) Also, per your comment in another nearby thread, the foundation was all laid down prior to my employment... not that that would have changed anything.

The assumption that large-scale changes could only come from changing requirements is sort of what I was getting at when I was talking about how the limitations of not-using-testing can end up internalized as the limitations of programming itself.

Might I also just say one more time that testing can indeed be used very stupidly, and tests with net negative value can be very easily written. I understand where some of the opposition comes from, and I mean that perfectly straight. It is a skill that must be learned, and I am still learning. (For example: code duplication in tests is just as evil as it is in real code. One recurring pattern I use is a huge pile of data at the top and a small loop at the bottom that drives the test. Testing your user permissions system this way is great: you lay out what your users are, what the queries are, and what the results should be in a big data structure, then just loop through and assert they match. Do not type the entire thing out as individual assertions.) But it is so worth it.
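(A toy version of that data-at-the-top, loop-at-the-bottom pattern, with a permission model invented purely for illustration:)

    # The permission rules and cases are made up; the shape is the point:
    # all the data at the top, one small loop at the bottom.

    def can_access(role, action):
        # Hypothetical system under test.
        rules = {
            "admin":  {"read", "write", "delete"},
            "editor": {"read", "write"},
            "viewer": {"read"},
        }
        return action in rules.get(role, set())

    # (role, action, expected result)
    CASES = [
        ("admin",  "delete", True),
        ("editor", "write",  True),
        ("editor", "delete", False),
        ("viewer", "read",   True),
        ("viewer", "write",  False),
        ("ghost",  "read",   False),
    ]

    def test_permissions():
        for role, action, expected in CASES:
            assert can_access(role, action) == expected, (role, action)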


That's an amazing story.

A few questions:

1) How many lines of code are in that man-century project? Is the number of lines of code ~proportional to the number of man-hours, or is the lines(man-hours) function ~logarithmic?

2) What does your typical project look like (or what did that project look like) in terms of testing vs. coding? Do you spend a few months covering the old code with tests and only then start adding features, or do you follow an "add tests - add features - add tests - add features - ..." cycle?

What's the ratio of time spent writing tests to time spent writing code?

3) What's the split between time spent directly working (analyzing requirements, testing, writing code) and time spent learning in general (books, HN, etc.)?

4) Do you do most of the work yourself, or are you mostly leading your team?

5) How do you pick your projects, and once you pick them, what is your relationship with the client: fixed contract? Hourly contract? Employment?

Thanks!


So, at the end of the day, you never actually did design-from-scratch work, and instead used tests to verify incremental design improvements (key part: verify not create)?


New hacker news rule: if you haven't done at least 80% of what he's talking about, you can't dick-measuring-contest him.

From-scratch work is the easier part of programming.


I've done this before, friend. Starting from scratch is indeed easier.

The point I was making was that he used unit tests to confirm his design (as a safety net), not as a primary design tool.


Starting from scratch does not take into account all the growing pains the previous software had that got it into the quagmire you have learned to hate.


The sum total of the improvements was not incremental. Testing helped give me a more incremental path, but from the outside you would not have perceived the changes as incremental.



