Code Coverage vs Test Coverage: Key Differences

“Code coverage” and “test coverage” are commonly (and wrongly) used interchangeably.

27. Aug 2024

While the terms are often used interchangeably, they actually describe completely different aspects of software testing.

In short, code coverage measures the percentage of code that is executed by your test suite, while test coverage is a more manual measure of how well your tests cover your application's features, test cases, requirements and more. OtterWise is, amongst other things, used to track code coverage. In this article I will explain the main differences and how each is useful.

Code Coverage

As mentioned above, code coverage measures the percentage of code executed by tests. For example, if you have 500 lines of code and your unit tests run 400 of those lines, you have a code coverage of 80%. This number is not everything, however, and mindlessly aiming for 100% will generally not be a worthy endeavor. While high code coverage is valuable, as it can indicate that your code is in a functional state, it does not prove that your application works as intended; your tests might, but the code coverage number does not.
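
As a small illustration, consider the hypothetical class and Pest-style test below (both are made up for this example, not taken from any real codebase). Only one of the two methods is exercised, so a coverage tool would report roughly half of the class's executable lines as covered:

class TemperatureConverter
{
	public function toFahrenheit(float $celsius): float
	{
		return $celsius * 1.8 + 32;   // executed by the test below
	}

	public function toKelvin(float $celsius): float
	{
		return $celsius + 273.15;     // never executed, so this line is uncovered
	}
}

test('converts celsius to fahrenheit', function () {
	$converter = new TemperatureConverter();

	// Only toFahrenheit() runs, so toKelvin() shows up as
	// uncovered in the line coverage report.
	$this->assertSame(212.0, $converter->toFahrenheit(100));
});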

Code coverage can be more than just lines covered; it can also be branch coverage, which goes slightly deeper and tells you how many of your application's execution paths are covered by tests. Paths are created when your code contains logical blocks such as “if”, which splits your code into at least two different paths: a true path and a false path.
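
The sketch below (again a hypothetical function, not from any real codebase) shows why the distinction matters: the single test executes every line, so line coverage is 100%, yet the false branch of the if-statement is never taken, so branch coverage is only 50%:

function displayName(?string $name): string
{
	$label = 'Guest';
	if ($name !== null && $name !== '') {
		$label = $name;
	}
	return $label;
}

test('displays a provided name', function () {
	// Every line above is executed, so line coverage reports 100%,
	// but the case where $name is null or empty was never exercised,
	// leaving one branch uncovered.
	$this->assertSame('Alice', displayName('Alice'));
});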

Test Coverage

This one requires manual work, and you generally cannot quantify it with a single number, at least not as simply. This is because test coverage is a completeness/test-quality measurement, and by default, your test runner does not know what your application's requirements, features or risks are.

To start, you should ask yourself three questions:

  • Are all business requirements tested? (this is requirement coverage)
  • Are all features or user scenarios covered by tests? (this is feature coverage)
  • Have tests addressed the high-risk areas of the application? (this is risk coverage)

A good way to start thinking about test coverage is to make a traceability matrix. This might sound complex, but it can in fact be done with a simple table. Below is an example of such a matrix.

Requirement/Feature/Risk | Test Case Description                         | Status
Authentication           | Guests can create a new user                  | Passing
Authentication           | Users can reset their password                | Passing
Checkout Process          | Orders can be made with simple products       | Failing
Checkout Process          | Orders can be made with variation products    | Failing
Payment Processing        | Payments are processed with valid information | Passing
Payment Processing        | Payments fail with invalid information        | Passing

The table above shows that we have three main requirements/features: Authentication, Checkout Process and Payment Processing. These are our core components, and for each of them we list the test cases that must be in place before we can consider our test coverage good.

Illustrating the differences

It might still not be entirely clear what the difference is, so let me give some examples with real code. Let us say we have a very simple user model that can log in and log out. You can achieve 100% code coverage with a test such as this:

test('can log in and out', function () {
	$user = User::make();

	// Calling both methods executes every line of the model,
	// giving 100% code coverage.
	$this->assertTrue($user->login($password = '123456'));
	$this->assertTrue($user->logout());
});

Both methods are called, so technically you have full code coverage, but that does not mean all your business requirements, features or risks are covered. You could imagine scenarios like the following also being relevant to cover:

  • Attempting logging in with an invalid password
  • Logging out despite not yet being logged in
  • Logging in while already logged in
  • Session expiration

Without a test coverage matrix, it can become hard to keep track of whether these cases are covered, since your code coverage might look fine even though certain logic was never added or even thought of.
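
As an example, a test for the invalid-password scenario could look like the sketch below. Note that I am assuming login() returns false when the password is wrong; the behaviour of your own User model may differ.

test('cannot log in with an invalid password', function () {
	$user = User::make();

	// Assumption: login() returns false for a wrong password.
	$this->assertFalse($user->login($password = 'wrong-password'));
});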

A matrix for the above could look like this. Notice how only the first two rows are passing, since they are covered by the test case above; the rest have not been covered yet, but we are aware of them in our matrix.

Requirement/Feature/Risk | Test Case Description                                      | Status
Authentication           | Users can log in                                           | Passing
Authentication           | Users can log out                                          | Passing
Authentication           | Users are sent to log in page when session expires        | Failing
Authentication           | Users cannot log in while already logged in               | Failing
Authentication           | Users will receive an error if inputting invalid password | Failing
Authentication           | Users must be logged in to log out                         | Failing

Summary

  • Code coverage is about measuring how much of the code is touched by the tests.
  • Test coverage is about ensuring all relevant behaviors, requirements, and edge cases are tested, so the application works as intended in real-world use.

This is an important thing to keep in mind, as you can have high code coverage while having bad test coverage. Test coverage is not a clear metric you can automatically see from looking at your code; on the contrary, it is shaped by your business logic, requirements, risks and features.

A well-structured application has both, usually starting from code coverage and building towards describing and monitoring test coverage.

OtterWise can help you keep track of code coverage, type coverage (another quality metric on top of code coverage), code complexity and much more.

Now in beta is support for organising your test coverage matrices and collaborating with your team, with test cases automatically marked with their appropriate statuses as your test suites run. You can reach out to our Support if you would like access for your organization.

Improve code quality today

With OtterWise, you can track Code Coverage, test performance, contributor stats, code health, and much more.