- Use the language of the acceptance criteria (promise) to write a test
- Write a test for every promise you need to keep
- Writing tests teaches good language practices
- When inheriting a project, create a basic test suite as a baseline
- When migrating a project, migrate one thing at a time and use tests for integrity
If you ask my co-workers, testing is very close to my heart. As soon as I started writing tests, the crippling anxiety I faced when deploying to production disappeared overnight.
This is my first blog post and I haven’t figured out a format yet, so I’m just going to dump all of my thoughts on testing here in one go and maybe expand on some things in future posts.
Promises
Tests are abstract promises. “I promise to always (do this)”. In life it’s easy to forget your promises, especially after a long period of time. Tests are there to make sure you keep your promise.
You also don’t want to make a promise that you can’t keep. I see grooming, and the creation of promises (acceptance criteria), as the opportunity to ensure that the product owner and the coder understand the contract they are about to commit to.
Promises are also baggage. It is in everybody’s interest that the baggage (code creep) is kept to a minimum, so grooming is another opportunity to simplify the contract.
Controversially, I think most unit tests are baggage, certainly on the projects I work on. It’s rare that a product team communicates that they need an exception to be thrown, but unit tests allow us to support the finer details of the requirements. Let me explain…
The language of tests
The first learning curve somebody faces when starting out with tests is deciding which type of test to write.
My favourite explanation is: it depends on the language used to express the requirements.
| Test | Language | Criteria / test name | Test steps |
| --- | --- | --- | --- |
| Acceptance tests | The language of the user | I can sign up for a trial account | I click the “Sign up” button; I fill in my email; I fill in my password; I click “Register”; I wait for the message “Registration complete”; I am on the page “My account” |
| API tests (functional) | The language of an API (e.g. RESTful) | Can list accounts | I send GET to “/api/v1/accounts”; I see the response is successful; I see a list of accounts |
| Unit tests | The language of code | Throws a validation exception when saving an empty user | I expect a ValidationException when I call `(new User())->save()` |
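The unit-test row above can be expressed as runnable code. The post’s fragment is PHP-flavoured, but here’s a minimal sketch in Python with pytest; the `User` model and `ValidationException` are hypothetical stand-ins, since the post doesn’t name a stack:

```python
import pytest

class ValidationException(Exception):
    """Raised when a model fails validation."""

class User:
    """Hypothetical stand-in model: saving without an email is invalid."""
    def __init__(self, email=None):
        self.email = email

    def save(self):
        if not self.email:
            raise ValidationException("email is required")
        return True

def test_throws_validation_exception_when_saving_empty_user():
    # The test speaks the language of code: classes, calls, exceptions.
    with pytest.raises(ValidationException):
        User().save()
```

Note how the test name reads like the acceptance criterion, just in the vocabulary of code rather than the vocabulary of the user.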
Which type of test should I write?
In short, it is important to have as many angles as possible from which to break your code.
I often refer back to the testing pyramid when I’m trying to explain why all types of tests are important. It’s a great article, I recommend a read.
My extended interpretation of the testing pyramid is that tests can support other tests, which can make them simpler.
For example, given the acceptance test “I can see validation message when I enter bad things”, we can assume that when a certain thing happens (for example a certain exception is thrown), the user will see a validation message…
Given this, we can write lots of shorter/faster unit tests to ensure each scenario produces the expected message (which we know the user will see, based on our assumption).
Now, the assumption is a risk. The only “true” way of knowing each validation error will be visible to the user is to write an acceptance test for each one. But each test beyond the first has diminishing returns while costing a lot more.
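The idea above, one slow acceptance test proving errors reach the screen, then many fast unit tests covering each scenario, might look like this as a sketch. The `validate_signup` function and its messages are hypothetical:

```python
import pytest

def validate_signup(email, password):
    """Hypothetical validator: returns a list of human-readable messages."""
    errors = []
    if "@" not in email:
        errors.append("Please enter a valid email")
    if len(password) < 8:
        errors.append("Password must be at least 8 characters")
    return errors

# One acceptance test elsewhere proves any message reaches the user.
# These fast, parametrized unit tests then cover every scenario
# behind that assumption.
@pytest.mark.parametrize("email, password, message", [
    ("not-an-email", "longenough", "Please enter a valid email"),
    ("me@example.com", "short", "Password must be at least 8 characters"),
])
def test_validation_message(email, password, message):
    assert message in validate_signup(email, password)
```

Adding a new validation rule now costs one parametrize row, not one full browser run.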
How long should I spend on tests?
Tests are unfortunately optional for most products. They don’t directly get the job done, so the business’s logic allows us to ignore them.
This is where TDD comes in: Write tests to test your code. You are testing your code anyway (aren’t you?). This way, “testing” doesn’t take time, it is the time.
TDD is a great practice in theory, but in practice I don’t think it lends itself to flexibility: in the real world, requirements can easily be misunderstood (or even changed) by the business mid-sprint, and that creates a lot of overhead.
My pseudo-TDD is to write tests as I code. This has evolved from when I used to create a test controller to play with classes in, or use Postman to check my API works as expected. It takes the same amount of time, provides a decent level of flexibility, and when I am finished, my tests are complete!
When I simply don’t have time to write tests (because I’m being rushed), I will just do the work and create a skipped test. This way, when the suite runs, I can see a to-do list of promises I’m not tightly enforcing (not the end of the world).
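The skipped-test trick is a one-liner in most frameworks. A sketch with pytest (the test name and reason are invented examples):

```python
import pytest

# A deliberately skipped test: the work shipped, the promise is recorded,
# and the skip reason shows up as a to-do item in every suite run.
@pytest.mark.skip(reason="TODO: enforce this promise - shipped in a rush")
def test_invoice_totals_include_vat():
    ...  # body to be written when time allows
```

Running the suite then prints the skip alongside the passes, so the unenforced promise is visible rather than forgotten.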
Tests improve code
I use tests to (1) test the code and (2) ensure it is nice to work with.
To write a test is to dogfood your own code and API before you show it to the world. You are a chef tasting the food before you send it out; once the food has been eaten, you can’t make any more changes.
Dependency injection, small methods, early returns, meaningful code, meaningful exceptions, separation of concerns, and other good practices – all things that I really started to understand and appreciate after writing tests.
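Dependency injection is a good example of a practice that tests force on you. A sketch (the `TrialAccount` class is hypothetical): injecting the clock, instead of calling `datetime.now()` inline, is exactly the kind of design that only feels necessary once you try to test time-dependent behaviour:

```python
from datetime import datetime, timezone

class TrialAccount:
    """Hypothetical model whose behaviour depends on the current time."""
    def __init__(self, started_at, clock=None):
        self.started_at = started_at
        # Injected clock; defaults to the real one in production.
        self._clock = clock or (lambda: datetime.now(timezone.utc))

    def is_expired(self, days=14):
        return (self._clock() - self.started_at).days >= days

def test_trial_expires_after_14_days():
    start = datetime(2024, 1, 1, tzinfo=timezone.utc)
    # A frozen clock turns "time" into a fixed, testable state.
    frozen = lambda: datetime(2024, 1, 16, tzinfo=timezone.utc)
    assert TrialAccount(start, clock=frozen).is_expired()
```

Without the injected clock, this test would only pass on certain days, which is exactly the “tests fail randomly” red flag listed later.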
Inheriting projects which don’t have tests
I have a habit of inheriting code-bases that have no tests. This is a scary time for me, because I’m suddenly tasked with keeping promises that other people have made, but are no longer responsible for keeping.
When I was first tasked with this scenario, my approach was to analyse the code and write a test for each objective behaviour I could see. It was a RESTful API, so my tests were along the lines of “Lists resources”, “Can filter by parameter”, “Fails when query parameter is not provided”.
I learnt a lot from the project doing it this way, but I inferred invalid promises because I over-thought everything the code intended to do (meaningful comments and git histories were pretty lacking, and the coders had since left).
Over-thinking led me to championing promises that were never promises in the first place. In grooming/planning I found myself arguing for a ‘previous intention’ to work in a certain way, even though the product team had never ‘agreed’ to it. Meanwhile, my tests ‘locked in’ the behaviour, making future modification difficult.
Example: “Fails when query parameter is not provided” is not RESTful. However, I had written a test that locked in what was probably a defect. Because this API had old phone clients that might have relied on this behaviour, I felt the need to keep the promise, even though I had no idea whether it was ever a promise in the first place!
What I’ve learnt: champion as few promises as you are willing to keep when inheriting a test-less project. How few will differ on a project-by-project basis, so perhaps talk about how much time you are given to ‘understand’ what you have inherited.
Migrating projects
Let’s start by saying it is easier to migrate one thing at a time, so let’s look at some types of migration and see which tests support each:
| Migrating… | Useful tests |
| --- | --- |
| Frontend | Acceptance |
| Backend (APIs) | API |
| Backend (not APIs) | Unit, functional (non-API) |
| Framework | Acceptance, API |
| Datasets | Acceptance, API |
| Refactoring code (BAU) | Acceptance, functional, unit |
| Views | Acceptance |
Often, migration will not be so simple, but when changing everything, having a “point of reference” is critical to reducing risk.
Testing red flags
When I see one of these in a test suite, I flag that something needs to be done; exactly what is very situational.
- Tests fail randomly (Test isn’t running from a fixed state)
- When a test fails, it is not clear from the name exactly what has broken (Test does too much, or the test name needs tweaking)
- When a test fails, it is not easy to replicate or understand why (Test is trying to be too clever, or something needs simplifying)
- A seemingly unrelated part of the code breaks in a test (Separation of concerns is lacking, or there’s a god class)
- A single test is slow (Something specific needs optimising, or the test does too much)
- The suite takes too long to run (App is getting too big and should be broken down, or there are too many tests)
- Tests break too easily (Test is not specific enough, or it is coupled too tightly to implementation details)
- Lots of skipped tests (Find the time to unskip them)
It’s just a list off the top of my head for now; I’ll probably expand on these in the future.
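To illustrate the first red flag, a test that isn’t running from a fixed state, here’s a minimal pytest sketch that pins a source of nondeterminism in a fixture (the names are invented; a frozen clock or a fresh database transaction would be pinned the same way):

```python
import random
import pytest

@pytest.fixture
def fixed_state():
    """Pin every source of nondeterminism the test touches."""
    random.seed(42)   # randomness is now deterministic for this test
    yield
    random.seed()     # teardown: restore unseeded randomness

def test_shuffled_order_is_repeatable(fixed_state):
    items = list(range(5))
    random.shuffle(items)
    random.seed(42)   # same seed => same "random" order, every run
    again = list(range(5))
    random.shuffle(again)
    assert items == again
```

A test that passes or fails depending on the seed, the clock, or leftover database rows isn’t keeping a promise; it’s flipping a coin.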