A lazy yet surprisingly effective approach to regression testing
To regression test, or not to regression test
Building up a proper testing suite can involve a good amount of work, which I'd prefer to avoid because it's boring and I'm lazy.
On the other hand, if I'm not careful, taking shortcuts that save effort in the short run could lead to a massive productivity hit further down the road.
For example, let's say instead of building up a rigorous test suite I test my code manually, and give a lot of careful thought to its correctness.
Right now I think I'm OK. But how long will it stay that way?
My code often embodies a significant amount of subtle complexity, and it would be easy to break it in the future, when the subtleties of the logic are not as fresh in my mind.
That could introduce regressions, nasty regressions, especially nasty if this code goes into production and breaks things for thousands of users. That would be a very public humiliation, which I would like to avoid.
So in the absence of a proper regression test, the most likely outcome is that if I ever need to make changes to the code I will get paralyzed by fear. And that, at the very least, will increase the cost of code maintenance considerably.
So maybe a regression test would be a good idea after all.
Comparing outputs: minimum work, maximum result
With that settled, the only part left to figure out is how to do it in the laziest possible way that still works.
For TKLBAM I implemented a lazy yet effective approach to regression testing which should work elsewhere as well.
In a nutshell, the idea is to set up a bit of code (e.g., a shell script) that can run in two modes (sketched in bare-bones form right after this list):
- create reference: this saves the output from code that is assumed to work well. These outputs should be revision controlled, just like the code that generates them.

  For example, in tklbam if you pass regtest.sh the --createrefs cli option it runs various internal commands on pre-determined inputs and saves their output to reference files.

- compare against reference: this runs the same code on the same inputs and compares the output with the previously saved output.

  For example, in tklbam when you run regtest.sh without any options it repeats all the internal command tests on the same pre-determined inputs and compares the output to the previously saved reference output.
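Stripped down to its essence, the technique is just "save a known-good output, then diff against it later". Here's a bare-bones sketch, in which seq 1 10 stands in for whatever command you actually want to pin down and refs/ is just a directory name picked for this example:

# create reference: run the code on a fixed input and save its output,
# which is assumed correct ("seq 1 10" is a stand-in for the real command)
mkdir -p refs
seq 1 10 > refs/seq.out     # commit refs/ to revision control, next to the code

# compare against reference: later, re-run the same command and diff its output
seq 1 10 | diff -u refs/seq.out - && echo OK || echo FAILED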
Example from TKLBAM's shell script regtest:
internal dirindex --create index testdir
testresult ./index "index creation"
testresult is just a shell script function with a bit of magic that detects which mode we're running in (e.g., via an environment variable, because this is a lowly shell script).
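I won't paste the actual function here, but something along these lines would do the job; the refs/ directory, the REGTEST_CREATEREFS variable and the test counter are just names picked for this sketch:

#!/bin/sh
# sketch of a dual-mode test helper; not TKLBAM's actual implementation

# decide the mode once, and signal it through an environment variable
[ "$1" = "--createrefs" ] && export REGTEST_CREATEREFS=yes

REFS=refs           # reference outputs, kept under revision control
mkdir -p $REFS
n=0

testresult() {
    # usage: testresult <output-file> <test description>
    output=$1
    name=$2
    ref=$REFS/$(basename $output)
    n=$((n + 1))

    if [ "$REGTEST_CREATEREFS" = "yes" ]; then
        cp $output $ref                            # save the output as the new reference
        echo "CREATED: $n - $name"
    elif diff -q $ref $output > /dev/null; then    # compare against the saved reference
        echo "OK: $n - $name"
    else
        echo "FAILED: $n - $name"
    fi
}

# example test: produce a deterministic output file, then save or check it
seq 1 10 > index
testresult index "index creation"

Run once with --createrefs this prints a CREATED line; on later runs without arguments it prints OK (or FAILED) lines much like the real regtest.sh output below.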
Example usage of regtest.sh:
$ regtest.sh --createrefs
$ regtest.sh
OK: 1 - dirindex creation
OK: 2 - dirindex creation with limitation
OK: 3 - dirindex comparison
OK: 4 - fixstat simulation
OK: 5 - fixstat simulation with limitation
OK: 6 - fixstat simulation with exclusion
OK: 7 - fixstat with uid and gid mapping
OK: 8 - fixstat repeated - nothing to do
OK: 9 - dirindex comparison with limitation
OK: 10 - dirindex comparison with inverted limitation
OK: 11 - delete simulation
OK: 12 - delete simulation with limitation
OK: 13 - delete
OK: 14 - delete repeated - nothing to do
OK: 15 - merge-userdb passwd
OK: 16 - merge-userdb group
OK: 17 - merge-userdb output maps
OK: 18 - newpkgs
OK: 19 - newpkgs-install simulation
OK: 20 - mysql2fs verbose output
OK: 21 - mysql2fs myfs.tar md5sum
OK: 22 - fs2mysql verbose output
OK: 23 - fs2mysql tofile=sql
I'm using this to test CLI-level outputs, but you could use the same basic approach for code-level components (e.g., classes, functions) as well.
Note that one gotcha is that you have to scrub local noise (e.g., timestamps, the current working directory) out of the outputs you're comparing. For example, the testresult function I'm using for tklbam runs the output through sed and sort so that it doesn't include the local path.
Instead of:
/home/liraz/public/tklbam/tests/testdir 41ed 0 0
/home/liraz/public/tklbam/tests/testdir/chgrp 81a4 0 e1f06
/home/liraz/public/tklbam/tests/testdir/chmod 81a4 0 d58c9
/home/liraz/public/tklbam/tests/testdir/chown 81a4 0 d58cc
/home/liraz/public/tklbam/tests/testdir/donttouchme 81a4 0 4b8d58ab
/home/liraz/public/tklbam/tests/testdir/emptydir 41ed 0 4b8d361c
/home/liraz/public/tklbam/tests/testdir/file 81a4 0 d35bd
/home/liraz/public/tklbam/tests/testdir/link a1ff 0 d362e
We get:
testdir 41ed 0 0
testdir/chgrp 81a4 0 e1f06
testdir/chmod 81a4 0 d58c9
testdir/chown 81a4 0 d58cc
testdir/donttouchme 81a4 0 4b8d58ab
testdir/emptydir 41ed 0 4b8d361c
testdir/file 81a4 0 d35bd
testdir/link a1ff 0 d362e
That way the tests don't break if we happen to run them from another directory.
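I won't reproduce the exact sed expression here, but a helper along these lines (the normalize name and the find demo are just for illustration) produces that kind of transformation:

# normalize output before saving or comparing it: strip the current working
# directory from absolute paths and sort, so results don't depend on where
# the tests happen to be run from
normalize() {
    sed "s|$(pwd)/||g" | sort
}

# demo with a throwaway directory
mkdir -p testdir/emptydir
find "$(pwd)/testdir"               # prints absolute paths rooted at the current directory
find "$(pwd)/testdir" | normalize   # prints just: testdir and testdir/emptydir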