A paradox of choice
At one of my first jobs as a software engineer, I had the privilege (or misfortune, depending on how you see it) of working at a startup whose mantra was Move Fast, Break Things. Engineering was focused on shipping features as quickly as possible, and testing was done in production. Naturally, with a crew of mostly freshly minted software engineers, we racked up technical debt fast. Much of the code was (metaphorically) held together with duct tape at every layer in the rush to ship.
Needless to say, bugs were frequent, and so were the customer complaints. The business team was getting frustrated with the engineers, asking, "Don't you guys test?"
Ouch. But they were right.
We weren't testing enough, and most of us agreed that we probably sucked at testing: the work is dry, and developers have a bit of tunnel vision when it comes to their own code.
Automating tests sounded like a good idea.
OK, so... where do we start? How should we do it? What tools should we use?
Some argued the best way was to start with unit tests, because then we could quickly isolate failures to individual components. Others argued that it would take too long to get coverage across most of the codebase, and that we should start with integration tests instead and work our way down to unit tests later.
Some argued Test-Driven Development (TDD) was the right way. Others argued that TDD was too time-consuming, and a waste of effort because requirements for new features changed frequently.
And then there were the tools.
The team was stuck in decision paralysis and decided to leave the discussion for a later time (which is usually when all hell breaks loose again).
Where to start? Just start with the scariest code.
The good thing about working at a startup is that you have a lot of autonomy and opportunities to experiment. (Or it could be a bad thing, when you have too many junior developers falling for the shiny-object trap...) I went ahead and started writing tests and hooking them up to our build process anyway.
Automating tests sounds like a tremendous undertaking when you have such a backlog, but every journey starts with a single step.
I found a good rule of thumb for deciding where to start.
Identify the code that you are most scared of touching.
Chances are, you are scared of touching this code because:
- It's complicated: there are a lot of conditionals, and it's probably not well documented.
- It's critical to the business.
I started by writing tests for our billing module. The calculations for how much a customer should be billed were complex. Unfortunately, the management team frequently wanted to change the billing model, too. If the calculations were wrong, well, we'd lose a lot of revenue and customer trust. And I really didn't want to get fired, or possibly be held liable for the financial loss...
Once you've figured out what to test, it's actually pretty easy to figure out how to test it and what tools to use.
For the billing module, I set up unit tests with JUnit since we had a Java codebase, and it was pretty simple to hook the tests into our Maven build so they ran every time a developer compiled the application. API tests could technically cover the same breadth, but I think immediate feedback on critical functionality was more important.
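To make this concrete, here's a minimal sketch of the kind of JUnit test I mean. `BillingCalculator` and its tiered-pricing rule (first 100 units at the base rate, the rest at a 20% discount) are hypothetical stand-ins, not our actual billing model:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical billing logic: first 100 units at the base rate,
// anything above that at 80% of the base rate.
class BillingCalculator {
    static BigDecimal monthlyCharge(int units, BigDecimal ratePerUnit) {
        int baseUnits = Math.min(units, 100);
        int discountedUnits = Math.max(units - 100, 0);
        BigDecimal base = ratePerUnit.multiply(BigDecimal.valueOf(baseUnits));
        BigDecimal discounted = ratePerUnit
                .multiply(new BigDecimal("0.80"))
                .multiply(BigDecimal.valueOf(discountedUnits));
        // Round to cents so the invoice amount is deterministic.
        return base.add(discounted).setScale(2, RoundingMode.HALF_UP);
    }
}

class BillingCalculatorTest {
    @Test
    void chargesBaseRateBelowTheTierBoundary() {
        assertEquals(new BigDecimal("50.00"),
                BillingCalculator.monthlyCharge(50, new BigDecimal("1.00")));
    }

    @Test
    void discountsUnitsAboveTheBoundary() {
        // 100 units at 1.00 + 50 units at 0.80 = 140.00
        assertEquals(new BigDecimal("140.00"),
                BillingCalculator.monthlyCharge(150, new BigDecimal("1.00")));
    }
}
```

With tests written like this, the Maven Surefire plugin picks them up automatically during the build, so a billing change that breaks a pricing rule fails the build before it ever reaches a customer. Note the use of `BigDecimal` rather than `double` for money: each test pins down one pricing rule, which is exactly the kind of behavior management kept changing.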
Once I had tests up and running, I wasn't afraid to refactor the code to improve readability and maintainability. And it wasn't overwhelming to hand over my work to another developer when I moved on to other projects, and eventually to another job. I could simply point to the unit tests and say: here's your documentation; this is how the code works. Looking back, it made me realise how true the saying "legacy code is code without tests" is.
So if you are thinking of automating tests, but it sounds overwhelming, try starting with the scariest code. Once you eat that frog, it gets easier.