There are lots of things for people to argue over on HN. When it comes to unit tests, I think you should write them, judiciously, of course.
I don't write unit tests when I'm sketching a program, or when I'm just playing around and not sure whether this is exactly what I want. Sometimes that's the case when I'm building a feature as well, because chances are the direction of the startup will change, so the tests and features I write might not make it into the next revision.
Along with that comes the discipline to shape up and write the unit tests when you know you're going to be stuck with the program and its features for a while. It's far too easy to fall into the trap of never cleaning up a sketch. I believe this is especially true when you're working on a code base with other programmers. When you write a program, you're not just writing something for a computer to execute--you're writing something for your fellow programmers to understand. Unit tests definitely help with that.
The important thing is to know what your goal is, and write accordingly. Just as in drawing and writing, you can draft and sketch, and then start checking the elements of style when you know this is exactly what you want.
For the most part, I've stuck to Ruby's basic test framework, Test::Unit, and it's worked pretty well for me as long as I was disciplined about it--use descriptive names, test one thing per test, keep it simple, etc. But lately, as the codebase has grown larger and larger, I've found that I hate writing tests, and that's led to some minor bugs slipping into production. Why is that?
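For concreteness, here's roughly what one of those plain Test::Unit tests looks like; the Cart class is just a stand-in I made up so the tests have something to exercise:

```ruby
require 'test/unit'

# A made-up stand-in class, just so the tests have something to chew on.
class Cart
  def initialize; @items = []; end
  def add(item);  @items << item; end
  def total;      @items.inject(0) { |sum, item| sum + item[:price] }; end
end

class CartTest < Test::Unit::TestCase
  def setup
    @cart = Cart.new
  end

  # Descriptive name, one behavior per test.
  def test_an_empty_cart_has_a_zero_total
    assert_equal 0, @cart.total
  end

  def test_adding_an_item_increases_the_total
    @cart.add(:price => 500)
    assert_equal 500, @cart.total
  end
end
```

Nothing fancy, and at a small scale that discipline is enough.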
Reading tests is a pain. There are often too many details in the assertions, and when you're just skimming code, they all look the same. Tests are also rather repetitious. Sometimes you have a scenario that you want to test with multiple parameters, like access to different methods for different users. Cut and paste gets you there, but it leaves a lot of boilerplate, as in the sketch below.
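Here's the kind of boilerplate I mean; the AccessPolicy class and the roles are made up, but the shape is the real problem:

```ruby
require 'test/unit'

# A made-up policy object, standing in for whatever guards access.
class AccessPolicy
  def initialize(role); @role = role; end
  def can_destroy?;     @role == :admin; end
end

class DestroyAccessTest < Test::Unit::TestCase
  # The same three lines, pasted once per role; and this is just one
  # action for three roles. Multiply by every method you care about.
  def test_admin_can_destroy
    assert_equal true, AccessPolicy.new(:admin).can_destroy?
  end

  def test_member_cannot_destroy
    assert_equal false, AccessPolicy.new(:member).can_destroy?
  end

  def test_guest_cannot_destroy
    assert_equal false, AccessPolicy.new(:guest).can_destroy?
  end
end
```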
Thoughtbot's Shoulda alleviates this somewhat by letting you nest contexts. Cucumber has scenario outlines with example tables, but the whole system is usually more cumbersome than I need.
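Roughly, the nested contexts look like this; a sketch assuming thoughtbot's shoulda gem and reusing the stand-in Cart class from the earlier example:

```ruby
require 'test/unit'
require 'shoulda'  # thoughtbot's gem; adds context/should on top of Test::Unit

class CartContextTest < Test::Unit::TestCase
  context "an empty cart" do
    setup do
      @cart = Cart.new
    end

    should "have a zero total" do
      assert_equal 0, @cart.total
    end

    context "after adding an item" do
      # Outer setup runs first, then this one.
      setup do
        @cart.add(:price => 500)
      end

      should "reflect that item's price in the total" do
        assert_equal 500, @cart.total
      end
    end
  end
end
```

The nested setup blocks cut down on the repetition, but the assertions themselves still read much the same.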
But really, the most difficult part of writing tests is the setup: getting the program into the right state so that you can run the test at all. Writing fixtures is a pain.
And reading this post about OOL (object-oriented languages), I realized that it has a parallel in testing: Machinist blueprints of ActiveRecord objects.
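For reference, a Machinist blueprint looks roughly like this. The models (Comment, Post, User) are hypothetical, and the point is that making one object quietly makes its whole graph:

```ruby
require 'machinist/active_record'
require 'sham'
require 'faker'

Sham.name  { Faker::Name.name }
Sham.email { Faker::Internet.email }

# Hypothetical models: a Comment belongs to a Post, which belongs to a User.
User.blueprint do
  name
  email
end

Post.blueprint do
  author { User.make }
  title  { "a post" }
end

Comment.blueprint do
  post { Post.make }
  body { "nice one" }
end

Comment.make  # one call, three rows: the comment, its post, and that post's author
```

You wanted a comment; you got a comment holding a post and the user who wrote it.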
Joe Armstrong (Erlang) once said "The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."
When you have a banana class to test, you'd like nothing better than to just instantiate it. But in order to do so, you need to tie it to a gorilla, which in turn assumes it's living in a jungle. This is what's meant by coupling objects to each other. It not only muddles up your design, it also makes testing much harder. When you're writing constructors, rely as little as you can on having other objects passed in. The only exception is when you're moving up a layer in abstraction.
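In plain Ruby, the coupled version looks something like this sketch:

```ruby
# The coupled version: you can't stand up a Banana without first standing
# up a Gorilla, which in turn insists on a Jungle.
class Jungle
  def initialize(name)
    @name = name
  end
end

class Gorilla
  def initialize(jungle)
    raise ArgumentError, "a gorilla needs a jungle" if jungle.nil?
    @jungle = jungle
  end
end

class Banana
  def initialize(gorilla)
    raise ArgumentError, "a banana needs a gorilla" if gorilla.nil?
    @gorilla = gorilla
  end

  def ripe?
    true  # whatever behavior you actually wanted to test
  end
end

# Just to test Banana, the setup drags in the rest of the jungle:
banana = Banana.new(Gorilla.new(Jungle.new("Amazon")))
banana.ripe?
```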
The fewer objects that need each other just to get started, the better. Have sensible defaults that the client object can adjust later. To avoid creating special objects just to pass them in as parameters, skip the data classes and use hashes and arrays instead. Just as Unix commands communicate through text, your class instantiations should communicate through arrays, values, and hashes. Every object already knows how to read them without having to instantiate and pass in yet another class.
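Concretely, that looks something like taking an options hash with sensible defaults instead of demanding the other objects up front (again, just a sketch):

```ruby
# The looser version: Banana stands on its own, with defaults the caller
# can override through a plain hash, and no Gorilla or Jungle in sight.
class Banana
  DEFAULTS = { :ripeness => 0.5, :length_cm => 18 }

  def initialize(options = {})
    @options = DEFAULTS.merge(options)
  end

  def ripe?
    @options[:ripeness] > 0.7
  end
end

# Instantiating it in a test needs no supporting cast:
Banana.new.ripe?                    # => false
Banana.new(:ripeness => 0.9).ripe?  # => true
```

If a banana really does need to know about its gorilla, that knowledge can arrive later, or live a layer up.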
I'm not exactly sure how to solve this testing problem. While you can design around it by reducing the coupling in your code, I'm sure that won't catch everything. Pure functional programming languages are said not to have this problem, but I don't have enough experience with one to know for sure. For now, I'll architect my way out of it, but there should be a better way.
Posted via email from The Web and all that Jazz