There are hundreds of different software testing types out there. Why is that? Because every functional and non-functional requirement of a software product needs to be tested, otherwise one might face lots of unpleasant surprises once the changes go live. But do we really need all of them? Probably not. However, we should know about the basic testing types and approaches that exist. Only then can we make an informed decision about what to incorporate into our own testing strategy.
Which testing types are out there?
Take a look at the following graphic. It will guide us on our way through the software testing jungle. Basically, there are two questions that help us categorize the type of testing we want to employ:
- What are we testing? In other words, what is the purpose of our test?
- How are we testing?
When looking at the purpose of a testing activity, we distinguish between functional tests and non-functional tests. Functional tests typically check if the features of our software work as intended. Non-functional tests address the non-functional requirements of the software. The most common non-functional testing types are:
- Performance tests check if the software responds quickly enough and can handle large amounts of data or users.
- Usability tests check if the software is easy to use.
- Security tests check if the software is protected against unauthorized access and other attacks.
- Accessibility tests check if the software is accessible to users with impairments.
Next, there are different options for how these tests are performed:
Which degree of automation is applied?
The degree of automation determines whether we are doing manual testing, semi-automated testing, or automated testing. With purely manual tests there is no automation at all: the tests are performed by hand by a tester or software developer. With semi-automated tests some steps are automated, but one or more manual steps are still required. Finally, with fully automated tests there is no human interaction at all. In state-of-the-art projects it is typically a goal to have a high degree of automation, because manual testing cycles slow down development dramatically.
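To make the distinction concrete, here is a minimal sketch of a fully automated test in Go; the `Add` function is a made-up example. Once such a test exists, it runs on every build without anyone clicking through the application:

```go
package calc

import "testing"

// Add is a deliberately trivial example function; in a real project this
// would be production code living in its own file.
func Add(a, b int) int {
	return a + b
}

// TestAdd needs no human interaction at all: `go test ./...` runs it on every
// commit, which is what a fully automated test means in practice.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d, want 5", got)
	}
}
```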
On which level are the tests performed?
Then, tests can be performed on different levels of the system. When we look at the individual components of our software in isolation, we are talking about unit tests. When we verify that two or more components work properly together, we are performing an integration test. Finally, with system tests we verify that the completely deployed software product works as expected.
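As a rough sketch of the difference in levels, consider a registration feature. The helpers `IsValidEmail`, `NewServiceWithTestDatabase`, `Register`, and `UserExists` are hypothetical and only stand in for real project code:

```go
package registration

import "testing"

// Unit level: a single validation rule is checked in isolation, with no
// database or network involved. IsValidEmail is a hypothetical helper.
func TestIsValidEmail(t *testing.T) {
	if !IsValidEmail("jane@example.com") {
		t.Error("expected jane@example.com to be accepted")
	}
	if IsValidEmail("not-an-email") {
		t.Error("expected not-an-email to be rejected")
	}
}

// Integration level: the registration service and the database are exercised
// together. NewServiceWithTestDatabase, Register, and UserExists are
// hypothetical placeholders for real project code.
func TestRegisterStoresUser(t *testing.T) {
	service := NewServiceWithTestDatabase(t)
	if err := service.Register("jane@example.com"); err != nil {
		t.Fatalf("Register failed: %v", err)
	}
	if !service.UserExists("jane@example.com") {
		t.Error("registered user was not stored")
	}
}
```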
Is the source code considered?
Testing can be done with or without knowledge of the underlying source code. Pure black box testing means that the source code is not taken into consideration at all. White box testing means that the tester or tool analyzes the source code directly to find problems. Finally, gray box testing is a mixture of both: the source code is partially available for testing.
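A small Go sketch of the difference: with white box knowledge we see the two branches of the made-up `Cost` function and cover both on purpose, whereas a black box test might only ever hit the standard case.

```go
package shipping

import "testing"

// Cost is the (made-up) function under test. White box testing means we look
// at its source and notice the two branches that need covering.
func Cost(weightKg float64) float64 {
	if weightKg > 20 {
		return 15.0 // bulky parcel
	}
	return 5.0 // standard parcel
}

// A black box test might only try a "typical" parcel and never reach the
// bulky branch. Knowing the source, we cover both paths explicitly.
func TestCostCoversBothBranches(t *testing.T) {
	if got := Cost(2); got != 5.0 {
		t.Errorf("Cost(2) = %v, want 5", got)
	}
	if got := Cost(25); got != 15.0 {
		t.Errorf("Cost(25) = %v, want 15", got)
	}
}
```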
Is the software executed?
Whether we execute the software during testing or not determines whether we are performing a static or a dynamic test. Static testing means the software is not executed; well-known examples are code reviews and applying a static analysis tool such as a linter to the code base. Dynamic testing, in contrast, executes the software, which is the case for the classic test runs described above.
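As an example of static testing, the following intentionally buggy Go snippet is never executed during analysis: a static analysis tool such as `go vet` reports the mismatched format verb just by reading the source.

```go
package main

import "fmt"

func main() {
	name := "Symflower"
	// The %d verb does not match the string argument. `go vet` flags this
	// without ever running the program:
	//   fmt.Printf format %d has arg name of wrong type string
	fmt.Printf("Hello %d\n", name)
}
```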
Are employees or customers involved?
Finally, the degree to which end users are involved determines whether we are talking about alpha, beta, or gamma tests. Alpha tests only involve in-house staff such as the development team. Beta tests usually involve a limited, selected group of end users. Gamma tests are typically performed shortly before a release, and changes are only made if high-priority bugs are detected. At Symflower we do not perform any gamma tests: our rolling release model replaces classic release management.
If we test, for instance, that our product sends a confirmation email after a new user registers successfully, and we have automated that procedure, then we have an “automated functional system test”.
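A rough sketch of what such a test could look like in Go is shown below. The staging URL, the form fields, and the `mailInboxContains` helper (which would query a test mail server) are hypothetical and depend on the concrete project setup:

```go
package systemtest

import (
	"net/http"
	"net/url"
	"testing"
)

// The fully deployed application is exercised over HTTP and the confirmation
// email is checked through a test mail inbox. All URLs and the
// mailInboxContains helper are hypothetical placeholders.
func TestRegistrationSendsConfirmationEmail(t *testing.T) {
	resp, err := http.PostForm("https://staging.example.com/register", url.Values{
		"email": {"jane@example.com"},
	})
	if err != nil {
		t.Fatalf("registration request failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status code: %d", resp.StatusCode)
	}

	// mailInboxContains would ask the test environment's mail server whether
	// the confirmation email arrived.
	if !mailInboxContains(t, "jane@example.com", "Please confirm your registration") {
		t.Error("no confirmation email received")
	}
}
```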
OK - it is nice to have a way to sort our tests into all those sub-categories. But the more interesting question is “What and how should we test?” The answer to that is, as so often in software development: “It depends…”.
Finding our way through the software testing jungle
To choose the right testing strategy, we propose the following procedure.
First, we figure out “what we need to test”. Answering this question is, of course, highly dependent on the project we are working on. Basic functional testing, i.e. checking whether the core features of the product work, is a must-have. Next, the non-functional requirements that are absolutely necessary for our particular software solution should be tested. When listing the functional and non-functional requirements, e.g. “A user can log in”, we need to keep in mind that every requirement that is not tested potentially does not work in production.
Next, we need to figure out how we want to ensure that the selected requirements hold. This step is even harder to answer than the first one, as there are several pros and cons to take into consideration:
- Manual vs. automated testing: Manual tests typically do not require much upfront investment, but they require manual labor every time the code base changes. Automated tests, on the other hand, require an upfront investment but pay off eventually. It is typically much easier to change a code base with the safety net of automated tests that immediately verify those changes, and development cycles become faster because bugs are found early on.
- Unit vs. integration vs. system tests: The levels on which we test are not competing testing types, but rather complementary ones. The hard decision here is to choose which functionality is best tested on which level. A good rule of thumb for selecting the right level is the testing pyramid: test on the lowest level possible. Tests on lower levels are usually easier to write and maintain, faster to run, and easier to debug.
- Black box vs. white box testing: A purely black box testing approach is a bad idea, because we cannot be sure that we indeed tested all essential parts of the code base. If we have 10 functional black box tests in place, we do not know which cases within the source code they cover, or whether we missed a critical bug. We propose to have at least a code coverage metric in place that gives us an indicator of how “much” code is actually covered by the test suite. To learn more about the different coverage metrics, take a look at our blog post What are the different code coverage types?. Additionally, adding automated white box testing solutions, such as static analysis and test generation, can help to dramatically improve your code quality and keep it consistently high.
The graphic above shows schematically how this procedure can be applied. First, we list all relevant use cases that need to work. Next, we figure out how we are going to make sure they behave as expected.
Conclusion
All the existing testing types can look overwhelming at first sight, but by sticking to the procedure outlined above you have a good starting point for finding your way through the testing jungle.
If you enjoyed this article, please subscribe to our newsletter to be notified about future posts on coding, testing, and new features of Symflower. Feel free to share this article with your colleagues and friends via Twitter, LinkedIn, or Facebook if you want to help make their software testing a little less painful and more rewarding.