This Test Driven Development post was written by Orsi Banki, Transcend’s Scrum Master and Quality Manager. Fair warning – it’s highly technical, as she shows her expertise and her care for our software and clients.
Test Driven Development (aka ‘TDD’). What is it? When and how should we use it? These were the questions I had when one of the suppliers I used to work with announced that he was going to develop software with this methodology. Since then, several projects have been carried out using TDD, and we have learned a lot from them – especially how, and for what, TDD should and should not be used.
Let’s start a little earlier and look at just what the TDD methodology is all about.
The method suggests 3 very simple steps:
1. Write the test, run it, and watch it fail.
2. Write the code.
3. Run the test again and watch it pass.
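To make this concrete, here is a minimal sketch of the cycle in Python – a purely illustrative example with made-up names, not code from any of our projects. The test in step 1 fails while `add` does not exist yet; once the code in step 2 is written, the re-run in step 3 passes.

```python
import unittest

# Step 2 – the production code. In real TDD this function does not exist yet
# when the test below is first run, so that first run fails (step 1).
# Hypothetical, purely illustrative example.
def add(a: int, b: int) -> int:
    return a + b

# Step 1 – the test, written first.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 3 – run the test again; with the implementation in place it passes.
if __name__ == "__main__":
    unittest.main()
```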
So the methodology implies two things: (A) the tests are automated, and (B) all of the code gets tested.
Point A, test automation, is a very common direction today, as it speeds up the testing work. However, maintenance and analysis time can add up when we need to figure out whether an error came from the automated test or from the code itself. Automation engineers also like to say: ‘it’s worth automating what needs to be run many times (even on a daily basis) or for large amounts of data’ – that is when the effort invested in developing the automated tests pays off. And let’s be honest, sometimes the automation engineers are right 😉
Point B means that, in principle, all of the code will be tested – i.e. code coverage will be 100%. This raises an important question about test coverage, and it leads to further considerations that determine the success or failure of a TDD rollout:
I’ll give you an example of something that doesn’t work. In our case, the idea was that a test automation colleague would prepare automated tests in parallel with the development. The tests were functional tests covering complete functions end to end. That is all well and good, but overall we had two problems with it.
The first problem was that the code and the tests were written by different people. You might think that’s a good thing, because the tests and the code can be written in parallel and everything will be ready sooner. But that’s not the case.
On one hand, the cycle that Test Driven Development prescribes – write your test, run it and watch it fail, then create the code, run the test again and watch it pass – is never actually realized.
On the other hand, you cannot guarantee that the tests you have prepared will cover every case. Of course, there are techniques to check code coverage after the test run, or to check for each run whether every requirement has been tested. But let’s face it, these are all reactive methods and their cost-effectiveness is not the best.
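For example, one such reactive check is to measure coverage after the tests have run. The sketch below assumes Python with the coverage.py package installed; the package name and test directory are hypothetical (the same thing can be done from the command line with `coverage run` and `coverage report`).

```python
import unittest
import coverage

# Reactive coverage check – assumes coverage.py is installed; "calculator"
# and the "tests" directory are hypothetical names.
cov = coverage.Coverage(source=["calculator"])
cov.start()

suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()  # gaps in coverage only become visible after the run
```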
The other fundamental question is: what level of tests should we write? In our case, end-to-end functional tests were written. About halfway through the project, it turned out that this idea was flawed.
From that point on, interface tests were carried out instead, as the new component to be delivered by the project had many interface connections. And even that wasn’t enough, because unfortunately the tests simulated fake interfaces instead of mock interfaces.
The big difference between the two is that a mock interface returns a hard-coded, correct answer for every variation, while a fake interface has enough real behaviour to return both a negative and a positive response.
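A small illustrative sketch of that difference in Python – the payment gateway below is hypothetical, it simply stands in for any interface the tests have to simulate:

```python
from unittest.mock import Mock

# Mock: returns the same hard-coded "correct" answer for every variation.
mock_gateway = Mock()
mock_gateway.charge.return_value = {"status": "OK"}

# Fake: a tiny working stand-in with real (if simplified) behaviour,
# so it can produce both a positive and a negative response.
class FakePaymentGateway:
    def charge(self, amount: float) -> dict:
        if amount <= 0:
            return {"status": "REJECTED"}  # negative response
        return {"status": "OK"}            # positive response

fake_gateway = FakePaymentGateway()

print(mock_gateway.charge(-10))   # {'status': 'OK'} – canned answer, even for bad input
print(fake_gateway.charge(-10))   # {'status': 'REJECTED'}
print(fake_gateway.charge(25.0))  # {'status': 'OK'}
```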
But getting back to our example: TDD is not a replacement for interface, functional or, for that matter, regression tests!
By the end of the project, it was clear that it is unit tests that should be written with this method! If you use the method this way, everything falls into place. You write the unit test, then write the class, method, etc., and run the test again. At the end of the round, you refactor. That’s it and no more.
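As an illustrative sketch of one such round (again with made-up names): the unit test is written first and stays untouched, the simplest possible implementation makes it pass, and the refactoring at the end of the round is proven safe by re-running the very same test.

```python
import unittest

# End of the round, after refactoring: the first "just make it pass" version
# was a chain of if/else branches; this cleaner version behaves the same,
# and re-running the unchanged test below proves it. Hypothetical example.
def shipping_cost(weight_kg: float) -> float:
    base, per_extra_kg = 5, 2
    return base + max(weight_kg - 1, 0) * per_extra_kg

# The unit test written at the start of the round – it never changes.
class TestShippingCost(unittest.TestCase):
    def test_base_rate_up_to_one_kg(self):
        self.assertEqual(shipping_cost(1), 5)

    def test_extra_kilograms_cost_more(self):
        self.assertEqual(shipping_cost(3), 9)

if __name__ == "__main__":
    unittest.main()
```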
If you use TDD in this way, you can be sure:
- everything will be tested 😊
- there is no unnecessary work
- and you can use the completed test kit to automate higher levels of tests (e.g. health checks, regression, etc.) – see the sketch below.
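As a sketch of that last point: the unit tests written during TDD can be bundled into a higher-level, re-runnable suite. The example below is hypothetical and only illustrates the idea.

```python
import unittest

# Hypothetical production code already covered by a TDD unit test.
def service_version() -> str:
    return "1.0.0"

class TestServiceVersion(unittest.TestCase):
    def test_version_is_reported(self):
        self.assertEqual(service_version(), "1.0.0")

def health_check_suite() -> unittest.TestSuite:
    """Bundle selected TDD unit tests into a quick, re-runnable health check."""
    return unittest.TestLoader().loadTestsFromTestCase(TestServiceVersion)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(health_check_suite())
```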