
Test-Driven Development (TDD) – Quick Guide [2024]

Last updated on January 11, 2024


Introduction

This article on Test-Driven Development (TDD) will help you become comfortable with the TDD cycle and adopt it in your own coding practice.

The concept of Test-Driven Development (TDD) was introduced in 2003 by Kent Beck. There is no single formal definition, but Beck describes the approach and gives examples of TDD. The goal of TDD is to “write clean code that works”.

TDD follows one rule of thumb: write new production code only when a test fails; otherwise, only refactor to clean up the existing code. When requirements change, convert them into test cases, add those tests, and only then write the new code.

TDD is a very short, repetitive development cycle: customer requirements are turned into highly specific test cases, and software is written and improved to pass those tests.

Test-Driven Development is related to the test-first programming concepts of extreme programming: it advocates frequent software releases in short development cycles and promotes extensive code review, unit testing, and the incremental addition of features.

A closely related concept is Acceptance Test-Driven Development (ATDD), where the customer, developer, and tester all participate in the requirement analysis process. TDD is a developer practice for both mobile and web app developers, whereas ATDD is a communication tool that ensures the requirements are well-defined.

Test-Driven Development (TDD) cycle


Let’s start with the basics and have a look at the TDD cycle, also known as the Red-Green-Refactor process, step by step.

The Test-Driven Development cycle:

1. Add a test, which will certainly FAIL. (Red)

In TDD, every feature in the software starts life as a test case. A test is created for each new or updated function. To write the tests, developers must understand the feature specifications and requirements.

This practice separates TDD from traditional software development methods where unit tests are written after writing source code. In this way, TDD makes the developer focus on the requirements before writing the code.
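
The article doesn’t tie the cycle to a particular language or framework, so here is a minimal sketch in Python with pytest. The requirement, the file names, and the calculate_discount function are hypothetical, introduced only to illustrate the Red step: the test is written first and fails because the production code doesn’t exist yet.

# test_pricing.py -- written before any production code exists (Red).
# Hypothetical requirement: order totals of 100 or more get a 10% discount.
from pricing import calculate_discount  # fails until pricing.py is written


def test_no_discount_below_threshold():
    assert calculate_discount(order_total=50.0) == 0.0


def test_ten_percent_discount_at_threshold():
    assert calculate_discount(order_total=100.0) == 10.0

Running pytest at this point fails immediately (here with an import error), which is exactly the Red state that step 2 checks for.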

2. Run all the tests. See if any test fails.

Running all the tests validates that the test harness is working correctly and, at the same time, confirms that the newly added tests fail with the existing code, proving that new code is required.

3. Write only enough code to pass all the tests. (Green)

The new code written in this stage may not be perfect and may pass the tests in an inelegant way. The only requirement at this stage is that all the tests pass. One possible way to begin is to return a constant, then incrementally add logical blocks to build up the function.
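
Continuing the hypothetical example from step 1, the Green step adds only enough code to make both tests pass. Hard-coding return 0.0 would be an even smaller first step to satisfy the first test; the second test then forces the real rule:

# pricing.py -- just enough code to turn the tests green.
def calculate_discount(order_total: float) -> float:
    if order_total >= 100.0:
        return order_total * 0.10
    return 0.0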

4. Run all the tests. If any test fails, go back to step 3. Otherwise, continue.

If all the tests pass, it can be said that the code meets the test requirements and does not degrade any existing features. If any test fails, the code must be edited to ensure that all the tests pass.

5. Refactor the code. (Refactor)

As the code base grows, it must be cleaned up and maintained regularly. How? There are a few ways:

  • New code that might have been added for convenience to pass a test can be moved to its logical place in the code.
  • Duplication must be eliminated.
  • Object definitions and names must be set to represent their purpose and usage.
  • As more features are added, functions become lengthy. It can prove beneficial to split them into smaller, carefully named functions to improve readability and maintainability (a small sketch follows this list).
  • As all tests are re-run throughout the refactoring phase, the developer can be confident that the process does not alter any existing functionality.
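
Sticking with the same hypothetical pricing example, a refactoring pass might do nothing more than replace the magic numbers with named constants. The behaviour is unchanged, so the existing tests keep passing:

# pricing.py after the Refactor step: same behaviour, clearer intent.
DISCOUNT_THRESHOLD = 100.0  # order total at which the discount kicks in
DISCOUNT_RATE = 0.10        # 10% discount


def calculate_discount(order_total: float) -> float:
    if order_total >= DISCOUNT_THRESHOLD:
        return order_total * DISCOUNT_RATE
    return 0.0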

6. If a new test is added, repeat from step 1.

Take small steps, targeting as few as 1 to 10 edits between each test run.

If the new code does not quickly satisfy a new test, or other unrelated tests start failing unexpectedly, undo/revert to the last working state instead of doing extensive debugging.

When using external libraries, it is important not to make increments that are so small that they merely test the library itself, unless it is to test whether the library is outdated/incompatible, buggy or not feature-complete.

Common practices in the TDD cycle

In this part, I’ll give you a quick walkthrough of common TDD practices that will help you code better.

Small units

A unit is a class or module: a group of closely related functions. Keeping units small brings benefits such as easier testing and debugging.

Test structure

As you will always be running tests for units, it’s important to apply the following test structure (a sketch in code follows the list):

  1. Setup: Getting the system or Unit Under Test (UUT) into the necessary state to run the tests, ensuring the preparedness of the system for testing.
  2. Execution: Run the test on the target and monitor all return values and outputs, ensuring that the path of execution is the one you’re targeting.
  3. Validation: Assert/Ensure that the results are correct. This is the point of declaring whether the test Passed/Failed.
  4. Cleanup: Restore the test-system to the original state. This permits another test to execute immediately.
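
In Python, one common way to make the four phases explicit is the standard unittest module, where setUp and tearDown map to the setup and cleanup phases. The unit under test is the hypothetical calculate_discount from the earlier sketches, so treat this as an illustration rather than the article’s own example:

import unittest

from pricing import calculate_discount  # the hypothetical unit under test


class CalculateDiscountTest(unittest.TestCase):
    def setUp(self):
        # 1. Setup: put the unit under test into the required state.
        self.order_total = 150.0

    def test_discount_above_threshold(self):
        # 2. Execution: run the targeted path and capture the output.
        discount = calculate_discount(self.order_total)
        # 3. Validation: assert the result; pass/fail is decided here.
        self.assertEqual(discount, 15.0)

    def tearDown(self):
        # 4. Cleanup: restore state so the next test runs independently.
        self.order_total = None


if __name__ == "__main__":
    unittest.main()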

Practices to avoid

  • Non-determinism – make sure the tests are deterministic. Dependencies on events such as API calls or the system date/time can cause tests to fail even without changes in the code (see the sketch after this list).
  • Requiring a fixed order of execution. If possible, allow tests to run in any order, and avoid having tests depend on the results of previous or other tests.
  • Testing precise execution timing or performance.
  • Test cases that evaluate more than they should (“all-knowing oracles”).
  • Tests that take significantly long to execute.
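
As an illustration of keeping tests deterministic, the sketch below lets the current date be injected, so the test pins it instead of depending on the real system clock. The is_weekend function is a hypothetical example, not something from the article:

from datetime import date
from typing import Optional


def is_weekend(today: Optional[date] = None) -> bool:
    # Production code: the current date can be injected for testing.
    today = today or date.today()
    return today.weekday() >= 5  # Saturday = 5, Sunday = 6


def test_is_weekend_is_deterministic():
    # The test pins the date, so it behaves the same way on every run.
    assert is_weekend(today=date(2024, 1, 13)) is True   # a Saturday
    assert is_weekend(today=date(2024, 1, 11)) is False  # a Thursday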

Individual practices

  • Keep every test focused only on the results necessary to validate it.
  • In non-real-time systems, build a tolerance for execution time into time-related tests. Allowing a 5% to 10% margin for late execution is a common practice that reduces the probability of false negatives (see the sketch after this list).
  • Treat the test code the same as the production code. This improves code quality and robustness.
  • Split the tests into smaller tests wherever feasible.
  • As a team, review your tests and test practices to share effective techniques and catch bad habits.
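
For the timing tolerance mentioned above, a check might look like the sketch below. The one-second budget, the 10% margin, and the process_batch stand-in are all assumptions made for illustration:

import time


def process_batch() -> None:
    # Stand-in for the real unit under test.
    time.sleep(0.05)


def test_batch_finishes_within_tolerance():
    budget_seconds = 1.0  # nominal execution budget
    tolerance = 0.10      # 10% margin for late execution

    start = time.perf_counter()
    process_batch()
    elapsed = time.perf_counter() - start

    assert elapsed <= budget_seconds * (1 + tolerance)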

Advanced practices

Acceptance Test-Driven Development (ATDD) builds on TDD practices: the development team’s target is to satisfy acceptance tests defined by the customer. The customer may have an automated mechanism to decide whether the software meets their requirements.

Test-Driven Development (TDD) – Advantages and disadvantages

Benefits

  • Writing tests in TDD forces you to think about use cases and improves productivity.
  • Even accounting for the extra code that writing unit tests requires, the total implementation tends to be shorter and less buggy (according to a model developed by Müller and Padberg, “About the Return on Investment of Test-Driven Development”).
  • Debugging becomes easier.
  • If common TDD practices are followed, the code developed is modularized, flexible, and extensible.
  • Automatic regression detection on every incremental update.
  • Automated tests are very thorough. As no more code is written than necessary to pass the failing tests, these automated tests tend to cover every code path.
  • Easier documentation: unit tests are self-documenting and easier to read and understand. (You should still document the source/production code descriptively.)

Limitations

  • TDD does not do well when functional tests are required, such as GUI design.
  • When developers themselves write the unit tests, the tests may share the same blind spots as the code.
  • A high number of passing tests can create a false sense of security, leading to fewer testing activities (such as integration testing) and, potentially, to problems later.
  • Tests become part of the maintenance overhead. Badly written tests further increase the cost of maintenance and updates.
  • The level of detail achieved during TDD cannot be easily recreated later on.

Test-Driven Development in practice

For large systems

For large systems, testing is challenging and requires a modular architecture with well-defined components. Some key requirements that must be fulfilled are:

  • High Cohesion ensures each module provides a set of related capabilities, making the corresponding tests easier to maintain.
  • Low Coupling allows isolated testing of modules.
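
Here is a minimal sketch of what low coupling buys you in tests, assuming a hypothetical OrderService that receives its payment gateway from outside. The test substitutes an in-memory fake and never touches the real payment system:

class FakePaymentGateway:
    # In-memory test double standing in for the real payment system.
    def __init__(self):
        self.charged = []

    def charge(self, amount: float) -> bool:
        self.charged.append(amount)
        return True


class OrderService:
    # The service depends only on the gateway passed in (low coupling).
    def __init__(self, gateway):
        self._gateway = gateway

    def place_order(self, amount: float) -> bool:
        return self._gateway.charge(amount)


def test_order_service_in_isolation():
    gateway = FakePaymentGateway()
    service = OrderService(gateway)

    assert service.place_order(25.0) is True
    assert gateway.charged == [25.0]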

Scenario Modeling

In Scenario Modeling, a set of sequence charts is constructed, each chart focused on a single system-level execution scenario. Each chart is an excellent vehicle for designing how components interact in response to a specific input.

Each Scenario Model serves as a set of requirements for the features that a component will provide. Scenario modeling can be helpful in constructing TDD tests in complex systems.

Code visibility and security

It is important to keep test code separate from production code. The unit test suite must be able to access the code under test, but design criteria such as information hiding, encapsulation, and the separation of modules must not be compromised.

In object-oriented design, tests typically cannot access private data members and methods without extra coding. Alternatively, an inner class can be used within the source code to contain the unit tests. Such testing hooks should not remain in the production code. TDD practitioners still debate whether private members should be tested at all.
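
One common resolution, sketched below with a hypothetical Account class, is to exercise private state only indirectly through the public interface, so the tests do not compromise encapsulation:

class Account:
    def __init__(self):
        self._balance = 0.0  # private by convention

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount

    def balance(self) -> float:
        return self._balance


def test_deposit_through_public_interface():
    # The private field is verified indirectly via public methods only.
    account = Account()
    account.deposit(40.0)
    account.deposit(10.0)
    assert account.balance() == 50.0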


Conclusion

In this article, we got an overview of Test-Driven Development (TDD). We looked at its benefits and limitations and at the practices associated with it, covering the ideas you need to know in order to start adopting the TDD cycle.




Authors

Matt Warcholinski
Chief Growth Officer

A serial entrepreneur, passionate R&D engineer, with 15 years of experience in the tech industry. Shares his expert knowledge about tech, startups, business development, and market analysis.
