Dave Astels on Why Your Code Sucks.
OK, so you decide to add tests to your code. That typically isn’t so easy. The odds are that your code wasn’t designed or implemented to be easily testable. That means you will have to make changes to it to make it testable… without breaking any of it (this is usually called refactoring these days). But how can you quickly and easily tell whether you’ve broken anything without that comprehensive set of fine-grained tests? Tricky.
It’s even trickier than that. If you take the approach that you’ll write tests for all your code after you write the code, you have to go to extra effort to try to design and write the code to be testable.
Maintaining an adequate level of testing when you write tests after the code takes time and planning. And if you have to do additional work to make your code testable… well… that sucks.
Writing tests before you write the code means that the code is testable by definition. If you work one test at a time, you reap even more benefit, as each test only requires incremental changes to code that you know is solid.
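The one-test-at-a-time rhythm can be sketched in a few lines. This is a hypothetical example (the `slugify` function and its behavior are invented here, not taken from Astels): the test is written first, fails because the function doesn’t exist, and then the minimal implementation is written solely to make it pass.

```python
# Hypothetical test-first sketch: the test below is written before the
# code it tests, and it defines the behavior the code must have.

def test_slugify():
    # These assertions existed before slugify did.
    assert slugify("Why Your Code Sucks") == "why-your-code-sucks"
    assert slugify("  Leading spaces  ") == "leading-spaces"

def slugify(title):
    # Minimal implementation, written only to satisfy test_slugify:
    # collapse whitespace into hyphens and lowercase the result.
    return "-".join(title.split()).lower()

test_slugify()  # passes once the implementation exists
```

Because each new test demands only a small change, every step starts from code that the existing tests already vouch for.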
This parallels an instructional design concept about evaluation. It says you need to devise your evaluation methods before you actually design your lesson. Bass-ackwards, but the more you think about it, the more it makes sense. Actually, this applies to learning tasks where it is easy to demonstrate mastery through behavior and where the criteria are clear-cut. For more open-ended types of learning (writing, cultural studies), the instructional designer ought not to define how to measure student success. Why?
When I taught a literary theory class in Vlore, Albania, my students would regularly blow me away with their answers to and analysis of literary questions. I was the smart teacher; they were the naive students; the problem was that my preconceived notions of how theoretical questions needed to be answered were ridiculously ossified. I found myself taking more notes on what my students were saying than they were.