Unit‑Test Code Is Still Code — Apply the Same Standards

Refactor your test setup to reduce duplication and make test intent obvious.
Baseline + override keeps tests readable, maintainable, and developer-friendly.

Arrange ‑ Act ‑ Assert (AAA)

AAA is the mental checklist for every unit test:

  • Arrange – prepare the Subject Under Test (SUT) and its dependencies
  • Act – call the method you want to test
  • Assert – verify the outcome

In most well‑written tests the Act step is a single line. Assert varies (the “one‑assertion” discussion is for another day). The biggest surface area is usually Arrange: a few lines in simple cases, but dozens when mocks need precise behaviour.
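
The checklist translates to any framework; a minimal pytest‑style sketch (the `Calculator` class is hypothetical, invented for illustration):

```python
class Calculator:
    """Hypothetical subject under test."""
    def add(self, a, b):
        return a + b

def test_add_returns_sum():
    # Arrange - prepare the subject under test
    sut = Calculator()

    # Act - a single call to the method under test
    result = sut.add(2, 3)

    # Assert - verify the outcome
    assert result == 5
```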

We apply DRY (Don’t Repeat Yourself) in production code, yet test setup is often copied verbatim “because tests must be independent”. There is a middle ground.


Why This Matters

Copy‑paste tests rot quickly. When a constructor changes or a new dependency is added, you fix the same problem in a dozen places and still miss one. Treat test code like production code and the suite becomes an asset instead of overhead.


The examples below use simplified C# with xUnit, Moq and FluentAssertions for illustration purposes, but the same ideas apply to TypeScript + Jasmine, Python + pytest, Java + JUnit, and most other mainstream test frameworks.

Problem Example — Duplication Hides Intent

The example below shows unit tests with only three setup lines each. The pattern is common, and in real tests the dependency setup is often far more complex, with many chained configuration calls.

C#
public class UnitTest1
{
    [Fact]
    public void Returns_2_when_Dependency1_returns_2()
    {
        // Arrange
        var dependency1 = new Mock<IDependency1>();
        var dependency2 = new Dependency2();
        dependency1.Setup(x => x.Call(1)).Returns(2);

        // Act
        var result = new MyTestClass(dependency1.Object, dependency2).Execute();
        
        // Assert
        result.Should().Be(2);
    }

    [Fact]
    public void Returns_3_when_Dependency1_returns_3()
    {
        // Arrange
        var dependency1 = new Mock<IDependency1>();
        var dependency2 = new Dependency2();
        dependency1.Setup(x => x.Call(1)).Returns(3);

        // Act
        var result = new MyTestClass(dependency1.Object, dependency2).Execute();

        // Assert
        result.Should().Be(3);
    }

    [Fact]
    public void Returns_minus4_with_alternative_strategy_and_2()
    {
        // Arrange
        var dependency1 = new Mock<IDependency1>();
        var dependency2 = new Dependency2Alt();
        dependency1.Setup(x => x.Call(1)).Returns(2);

        // Act
        var result = new MyTestClass(dependency1.Object, dependency2).Execute();

        // Assert
        result.Should().Be(-4);
    }

    [Fact]
    public void Returns_minus6_with_alternative_strategy_and_3()
    {
        // Arrange
        var dependency1 = new Mock<IDependency1>();
        var dependency2 = new Dependency2Alt();
        dependency1.Setup(x => x.Call(1)).Returns(3);

        // Act
        var result = new MyTestClass(dependency1.Object, dependency2).Execute();

        // Assert
        result.Should().Be(-6);
    }
}

The signal (what is different) is buried in the noise (what repeats). Imagine four dependencies instead of two—will you spot the tiny delta?


Refactored Example — Baseline + Targeted Overrides

Here the setup has been refactored into a shared fixture: the baseline defaults live in the constructor, and a small Run helper performs the Act step.

C#
public class UnitTest1
{
    // Shared fixture
    private readonly Mock<IDependency1> dependency1;
    private IDependency2               dependency2;

    public UnitTest1()
    {
        dependency1 = new Mock<IDependency1>();
        dependency1.Setup(x => x.Call(1)).Returns(2);
        dependency2 = new Dependency2();
    }

    private int Run() => new MyTestClass(dependency1.Object, dependency2).Execute();

    [Fact]
    public void Defaults_are_valid() 
    {
        // Act
        var result = Run();
        
        // Assert
        result.Should().Be(2);
    }

    [Fact]
    public void Returns_3_when_Dependency1_changes()
    {
        // Arrange
        dependency1.Setup(x => x.Call(1)).Returns(3);
        
        // Act
        var result = Run();
        
        // Assert
        result.Should().Be(3);
    }

    [Fact]
    public void Returns_minus4_with_alternative_strategy()
    {
        // Arrange
        dependency2 = new Dependency2Alt();

        // Act
        var result = Run();
        
        // Assert
        result.Should().Be(-4);
    }

    [Fact]
    public void Returns_minus6_with_alternative_strategy_and_3()
    {
        // Arrange
        dependency2 = new Dependency2Alt();
        dependency1.Setup(x => x.Call(1)).Returns(3);

        // Act
        var result = Run();
        
        // Assert
        result.Should().Be(-6);
    }
}

Each test highlights only the variation. A constructor change is fixed once; a new default mock behaviour is set once.
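
The same baseline‑plus‑override shape carries over to the other stacks mentioned above. A Python + pytest sketch, using `unittest.mock.Mock` in place of Moq; all class names (`MyTestClass`, `Dependency2`, `Dependency2Alt`) and their behaviour are hypothetical stand‑ins chosen to reproduce the values from the C# example:

```python
from unittest.mock import Mock

class Dependency2:            # hypothetical default strategy: pass-through
    def apply(self, value):
        return value

class Dependency2Alt:         # hypothetical alternative strategy: negate and double
    def apply(self, value):
        return -2 * value

class MyTestClass:            # hypothetical SUT
    def __init__(self, dependency1, dependency2):
        self.dependency1 = dependency1
        self.dependency2 = dependency2
    def execute(self):
        return self.dependency2.apply(self.dependency1.call(1))

class TestMyTestClass:
    def setup_method(self, method):
        # Shared fixture: valid defaults, overridden per test
        self.dependency1 = Mock()
        self.dependency1.call.return_value = 2
        self.dependency2 = Dependency2()

    def run_sut(self):
        return MyTestClass(self.dependency1, self.dependency2).execute()

    def test_defaults_are_valid(self):
        assert self.run_sut() == 2

    def test_returns_minus4_with_alternative_strategy(self):
        # Arrange: only the delta from the baseline
        self.dependency2 = Dependency2Alt()
        assert self.run_sut() == -4
```

As in the C# version, each test body carries only its deviation from the baseline.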

This approach allows for many practical variations:

  • Place the fixture directly in the test class or extract it.
  • Combine multiple overrides in a helper method with descriptive naming.
  • Use parameterized tests, data generators, or shared context classes when appropriate.
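
When the only variation between tests is input and expected output, a parameterized (data‑driven) test collapses the overrides further. A stdlib‑only Python sketch (the `MyTestClass` SUT is hypothetical; pytest would express the case table declaratively with `@pytest.mark.parametrize`):

```python
from unittest.mock import Mock

class MyTestClass:  # hypothetical SUT: returns whatever dependency1 yields
    def __init__(self, dependency1):
        self.dependency1 = dependency1
    def execute(self):
        return self.dependency1.call(1)

# (dependency_returns, expected) pairs; with pytest this table would be
# a @pytest.mark.parametrize decorator instead of a loop.
CASES = [(2, 2), (3, 3)]

def test_returns_what_dependency1_returns():
    for dependency_returns, expected in CASES:
        # Arrange: only the data varies; the setup shape is shared
        dependency1 = Mock()
        dependency1.call.return_value = dependency_returns

        # Act
        result = MyTestClass(dependency1).execute()

        # Assert
        assert result == expected
```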

Trade‑Offs to Keep in Mind

Using a shared baseline with test-specific overrides simplifies maintenance and makes differences easier to spot—but like all patterns, it comes with trade-offs.

✅ Benefits

  • Faster fixes: When constructor parameters or default behaviors change, you update the fixture once instead of hunting through every test.
  • Clearer deltas: You immediately see what a test changes—greatly reducing cognitive load and the risk of subtle mistakes.
    • Keep test bodies minimal and highlight only the relevant variation.
    • If the variation exceeds 2–3 lines across several tests, move those tests to their own class or use named setup methods.
  • Better developer flow: The mental load of parsing long, repetitive arrange blocks drops significantly—helpful in large codebases or during reviews.

⚠️ Considerations

  • Slightly steeper reading curve: The full context of the test is split across the fixture and test body.
    Readers must mentally combine the base setup and override. Mitigate this by testing the default explicitly.
  • Shared setup must be predictable: Shared defaults should remain stable. Hidden or drifting defaults cause fragile tests.
    • Avoid logic or branching in fixtures—prefer multiple clearly named helpers
  • Temptation to over-generalize: Adding too many conditionals or abstracting away key details into helpers can make the setup harder to understand than repetition would have been.
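
One way to keep fixtures free of branching is to replace a conditional setup flag with small, clearly named helper methods, each stating a single deviation from the baseline. A sketch with hypothetical names, again using `unittest.mock`:

```python
from unittest.mock import Mock

class TestWithNamedHelpers:
    def setup_method(self, method):
        # Stable, predictable default - no parameters, no branching
        self.dependency1 = Mock()
        self.dependency1.call.return_value = 2

    # Instead of `def setup(self, use_failure: bool)` with an if/else,
    # each named helper describes exactly one override.
    def given_dependency1_returns(self, value):
        self.dependency1.call.return_value = value

    def given_dependency1_fails(self):
        self.dependency1.call.side_effect = RuntimeError("dependency down")

    def test_override_is_explicit(self):
        self.given_dependency1_returns(3)
        assert self.dependency1.call(1) == 3
```

The test body reads as a sentence, and the fixture itself stays trivial enough to never need its own tests.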

🚫 What to Avoid

  • Don’t over-abstract. If the fixture or its helpers require their own unit tests, you’ve gone too far.
  • Avoid conditional setup code, hidden fixture dependencies, or setup paths that vary.
  • Don’t compromise test independence. Even with shared setup, each test execution must stand alone—reliable, predictable, and isolated.

Final Thought

A disciplined test suite is cheap to maintain and hard to break. Apply the same design principles you expect in production code—just in smaller, sharper strokes.

Patrick Verbeeten • Software Architect / Lead Developer
Helping teams ship reliable .NET systems • Azure • Code Quality
