# TDD and anti-patterns - Chapter 3

04 Mar 2022

## Matheus Marabesi


This post is a follow-up in a series on TDD anti-patterns. In the previous episode, we covered the mockery, the inspector, the generous leftovers and the local hero, which are part of the 22 anti-patterns listed by James Carr.

In this blog post, we focus on four more of them: the nitpicker, the secret catcher, the dodger and the loudmouth. Each one focuses on a specific aspect of code that makes testing harder. The four presented in this post are somehow connected to TDD beginners; most of them are patterns found among those who have not yet experienced writing applications guided by tests.

Besides that, according to the survey, we found that these are four of the least popular anti-patterns.

If you prefer to watch the workshop in which we talked about these anti-patterns, you can click on this link.

We continue our anti-patterns series of workshops with the next chapter coming on March 17th. Sign up here.

## The Secret Catcher

As the survey previously revealed, the secret catcher is in the 7th position of the most popular TDD anti-patterns.

The secret catcher is an old friend of mine. I have seen it in different code bases, and I often attribute this anti-pattern to the hurry in which features need to be released.

It’s called the “secret” catcher because you don’t know that the code is supposed to throw an exception; instead of handling it (catching it), the test case ignores it. The secret catcher is also related to The Greedy Catcher, which hasn’t been covered yet.

The following example, written in Vue.js with Apollo Client, is an attempt to depict such a scenario, where the method seems to be fine and does what it is supposed to do. In other words, it sends a mutation to the server to remove the payment type from the associated user and, in the end, updates the user interface to reflect that:

```javascript
async removePaymentMethod(paymentMethodId: string) {
  this.isSavingCard = true;

  const { data } = await this.$apolloProvider.clients.defaultClient.mutate({
    mutation: DetachPaymentMethodMutation,
    variables: { input: { stripePaymentMethodId: paymentMethodId } },
  });

  if (this.selectedCreditCard === paymentMethodId) {
    this.selectedCreditCard = null;
  }

  this.isSavingCard = false;
}
```

The JavaScript test is written using Jest and Testing Library, and here is a challenge for you: before going any further in the text, can you spot what is missing from the test case?

```javascript
test('it handles error when removing credit card', async () => {
  const data = await Payment.asyncData(asyncDataContext);
  data.paymentMethod = PaymentMethod.CREDIT_CARD;

  const { getAllByText } = render(Payment, {
    mocks,
    data() {
      return { ...data };
    },
  });

  const [removeButton] = getAllByText('Remove');
  await fireEvent.click(removeButton);
});
```

Let's start with the missing assertion at the end of the test case. If you haven't noticed, the last step the test performs is to wait for the click event, and that's it. For a feature that removes a payment method from a user, asserting that a message is shown would be a good idea. Besides that, the test case relies on the assumption that the GraphQL mutation will always work; in other words, if the mutation throws an exception in the production code, there is no sign of it being handled. In this scenario, the test case relies on Jest to report the error, if any.

## The Nitpicker

As the survey previously revealed, the nitpicker is in the 8th position of the most popular TDD anti-patterns.

As the definition goes, the nitpicker shows up in web applications where the assertion focuses on an entire object rather than the specific property needed. This is common for JSON structures, as depicted in the first example. The following code asserts that an application has been deleted. In this context, an application is a regular entry in the database with the label “application”.

Note that this example in PHP asserts the exact output of the HTTP request, nothing more, nothing less.

```php
public function testDeleteApplication()
{
    $response = $this->postApplication();

    $this->assertFalse($response->error);

    $this->delete('api/application/' . $response->data)
        ->assertExactJson([ // <------------- is this needed?
            'data' => (string) $response->data,
            'error' => false
        ]);
}
```

In this sense, this test is fragile for a specific reason: if we add another property to the response, it will fail, complaining that the JSON has changed. For a removed property, such a failure would be helpful; for a newly added property, it should not be the case.

The “fix” would be to replace the “exact” idea in this assertion with something less strict, such as the following:

```php
public function testDeleteApplication()
{
    $response = $this->postApplication();

    $this->assertFalse($response->error);

    $this->delete('api/application/' . $response->data)
        ->assertJson([ // <----------- changing this assertion
            'data' => (string) $response->data,
            'error' => false
        ]);
}
```

The change asserts that the desired fragment is indeed in the output, no matter whether there are other properties, as long as the desired one is there. This simple change opens up the door to move away from the fragile test we had in the first place.

Another way to avoid the nitpicker is to look only for the properties you want. The following code is from an open source project that handles the S3 signing process to access a resource in Amazon S3:

```javascript
describe('#getSignedCookies()', function() {
  it('should create cookies object', function(done) {
    var result = CloudfrontUtil.getSignedCookies('http://foo.com', defaultParams);

    expect(result).to.have.property('CloudFront-Policy');
    expect(result).to.have.property('CloudFront-Signature');
    expect(result).to.have.property('CloudFront-Key-Pair-Id');
    done();
  });
});
```

The code has three assertions that check for the desired properties instead of comparing everything at once, regardless of the rest of the output. Another example of how to approach such assertions is extracted from an open source project that aims to collect and process the four key metrics:

```kotlin
@Test
fun `should calculate CFR correctly by monthly and the time split works well (cross a calendar month)`() {
    val requestBody = """
        { skipped code }
    """.trimIndent()

    RestAssured
        .given()
        .contentType(ContentType.JSON)
        .body(requestBody)
        .post("/api/pipeline/metrics")
        .then()
        .statusCode(200)
        .body("changeFailureRate.summary.value", equalTo(30.0F))
        .body("changeFailureRate.summary.level", equalTo("MEDIUM"))
        .body("changeFailureRate.details[0].value", equalTo("NaN"))
        .body("changeFailureRate.details[1].value", equalTo("NaN"))
        .body("changeFailureRate.details[2].value", equalTo(30.0F))
}
```

Once again, RestAssured looks for the output property by property instead of comparing everything at once, as depicted in the first example. Testing frameworks usually offer such utilities to help developers test.
For example, in the first example, Laravel offers the assertJson/assertExactJson syntax. The second example uses Chai to depict how to assert specific properties in an object. Last but not least, RestAssured is used to depict how to deal with the nitpicker in the Kotlin ecosystem.

## The Dodger

As the survey previously revealed, the dodger is in the 8th position of the most popular TDD anti-patterns.

The dodger, in my experience, is the most common anti-pattern when starting with test-first development. Before diving into the code example, I would like to elaborate on why the dodger might appear.

Writing code in a TDD manner implies writing the test first for any code. The rule is: start with a failing test, make it pass, and then refactor. As simple as it sounds, there are specific moments while practising this flow when the question “what should I test?” kicks in. As the rule goes, the common approach is to start with one test class per production class, a 1-1 relationship. The following question then is: “how small should the step be to write a test?”. As this “small” is context dependent, it’s not clear what the minimum test code is before moving to the production code, even though there are videos and practitioners that make it look like a breeze.

Those two questions are common while starting to practise TDD, and they might lead to the dodger, which focuses on testing specific implementation code rather than the desired behaviour. To depict that, take the following production code:

```php
namespace Drupal\druki_author\Data;

use Drupal\Component\Utility\UrlHelper;
use Drupal\Core\Language\LanguageManager;
use Drupal\Core\Locale\CountryManager;

final class Author {

  public static function createFromArray(string $id, array $values): self {
    $instance = new self();
    if (!\preg_match('/^[a-zA-Z0-9_-]{1,64}$/', $id)) {
      throw new \InvalidArgumentException('Author ID contains not allowed characters, please fix it.');
    }
    $instance->id = $id;

    if (!isset($values['name']) || !\is_array($values['name'])) {
      throw new \InvalidArgumentException("The 'name' value is missing or incorrect.");
    }
    if (\array_diff(['given', 'family'], \array_keys($values['name']))) {
      throw new \InvalidArgumentException("Author name should contains 'given' and 'family' values.");
    }
    $instance->nameGiven = $values['name']['given'];
    $instance->nameFamily = $values['name']['family'];

    if (!isset($values['country'])) {
      throw new \InvalidArgumentException("Missing required value 'country'.");
    }
    $country_list = \array_keys(CountryManager::getStandardList());
    if (!\in_array($values['country'], $country_list)) {
      throw new \InvalidArgumentException('Country value is incorrect. It should be valid ISO 3166-1 alpha-2 value.');
    }
    $instance->country = $values['country'];

    if (isset($values['org'])) {
      if (!\is_array($values['org'])) {
        throw new \InvalidArgumentException('Organization value should be an array.');
      }
      if (\array_diff(['name', 'unit'], \array_keys($values['org']))) {
        throw new \InvalidArgumentException("Organization should contains 'name' and 'unit' values.");
      }
      $instance->orgName = $values['org']['name'];
      $instance->orgUnit = $values['org']['unit'];
    }

    if (isset($values['homepage'])) {
      if (!UrlHelper::isValid($values['homepage']) || !UrlHelper::isExternal($values['homepage'])) {
        throw new \InvalidArgumentException('Homepage must be valid external URL.');
      }
      $instance->homepage = $values['homepage'];
    }

    if (isset($values['description'])) {
      if (!\is_array($values['description'])) {
        throw new \InvalidArgumentException('The description should be an array with descriptions keyed by a language code.');
      }
      $allowed_languages = \array_keys(LanguageManager::getStandardLanguageList());
      $provided_languages = \array_keys($values['description']);
      if (\array_diff($provided_languages, $allowed_languages)) {
        throw new \InvalidArgumentException('The descriptions should be keyed by a valid language code.');
      }
      foreach ($values['description'] as $langcode => $description) {
        if (!\is_string($description)) {
          throw new \InvalidArgumentException('Description should be a string.');
        }
        $instance->description[$langcode] = $description;
      }
    }

    if (isset($values['image'])) {
      if (!\file_exists($values['image'])) {
        throw new \InvalidArgumentException('The image URI is incorrect.');
      }
      $instance->image = $values['image'];
    }

    if (isset($values['identification'])) {
      if (isset($values['identification']['email'])) {
        if (!\is_array($values['identification']['email'])) {
          throw new \InvalidArgumentException('Identification email should be an array.');
        }
        $instance->identification['email'] = $values['identification']['email'];
      }
    }

    return $instance;
  }

  public function getId(): string {
    return $this->id;
  }

  public function getNameFamily(): string {
    return $this->nameFamily;
  }

  public function getNameGiven(): string {
    return $this->nameGiven;
  }

  public function getCountry(): string {
    return $this->country;
  }

  public function getOrgName(): ?string {
    return $this->orgName;
  }

  public function getOrgUnit(): ?string {
    return $this->orgUnit;
  }

  public function getHomepage(): ?string {
    return $this->homepage;
  }

  public function getDescription(): array {
    return $this->description;
  }

  public function getImage(): ?string {
    return $this->image;
  }

  public function checksum(): string {
    return \md5(\serialize($this));
  }

  public function getIdentification(?string $type = NULL): array {
    if ($type) {
      if (!isset($this->identification[$type])) {
        return [];
      }
      return $this->identification[$type];
    }
    return $this->identification;
  }

}
```

The goal here is to validate the Author object: to be created, the given array should hold valid data, and if not, an exception is thrown. Next up is the testing code:

```php
public function testObject(): void {
  $author = Author::createFromArray($this->getSampleId(), $this->getSampleValues());
  $this->assertEquals($this->getSampleId(), $author->getId());
  $this->assertEquals($this->getSampleValues()['name']['given'], $author->getNameGiven());
  $this->assertEquals($this->getSampleValues()['name']['family'], $author->getNameFamily());
  $this->assertEquals($this->getSampleValues()['country'], $author->getCountry());
  $this->assertEquals($this->getSampleValues()['org']['name'], $author->getOrgName());
  $this->assertEquals($this->getSampleValues()['org']['unit'], $author->getOrgUnit());
  $this->assertEquals($this->getSampleValues()['homepage'], $author->getHomepage());
  $this->assertEquals($this->getSampleValues()['description'], $author->getDescription());
  $this->assertEquals($this->getSampleValues()['image'], $author->getImage());
  $this->assertEquals($this->getSampleValues()['identification'], $author->getIdentification());
  $this->assertEquals($this->getSampleValues()['identification']['email'], $author->getIdentification('email'));
  $this->assertEquals([], $author->getIdentification('not exist'));
  $this->assertEquals($author->checksum(), $author->checksum());
}
```

The first thing that I noted when skimming through the code is that if I need to change how I get the author name (rename the method), the test code has to change as well, even though the desired behaviour hasn't changed - the validation is still required.

Another approach would be to replace the single test case with multiple test cases, each expecting the desired exception when an invalid value is passed, and then to encapsulate the validation in a validator class to prevent the coupling between the test and the production code.

## The Loudmouth

As the survey previously revealed, the loudmouth is in the 8th position of the most popular TDD anti-patterns.

When developing, it is common to add some traces to check whether what the code is doing matches the developer's understanding. We can call that debugging, and it is often used when a developer needs to understand a piece of code.

TDD practitioners argue that once TDD is practised, no debugging tool is needed, be it a print statement or breakpoints added to the code. But what happens if you don't have that much experience with TDD?

Often the answer is a mix of both: debugging and letting the tests talk to you. For example, the following test code handles an error if it happens to receive invalid JavaScript code. Keep in mind that the code under test parses JavaScript code and acts upon the result:

```javascript
test.each([['function']])(
  'should not bubble up the error when an invalid source code is provided',
  (code) => {
    const strategy = jest.fn();

    const result = Reason(code, strategy);
    expect(strategy).toHaveBeenCalledTimes(0);
    expect(result).toBeFalsy();
  }
);
```

The check is straightforward. It verifies that the desired strategy wasn't called, as the code is invalid JavaScript, and that the result is the boolean false. Let's see now what the implementation under test looks like:

```javascript
const reason = function(code, strategy) {
  try {
    const ast = esprima.parseScript(code);

    if (ast.body.length > 0) {
      return strategy(ast);
    }
  } catch (error) {
    console.warn(error); // <------------------ this is loud
    return false;
  }
};
```

Ideally, working in a TDD fashion, the console.warn would have been mocked from the start, requiring verification of when it was called and with which message. This first hint already points to an approach that is not test-first. The following image depicts what the loudmouth causes: even though the tests are green, there is a warning message - did the test pass? Did the change break something?

Steve Freeman and Nat Pryce, in Growing Object-Oriented Software, Guided by Tests, explain why logging (such as this console.warn) should be treated as a feature instead of a random log used for whatever reason.

The following snippet depicts a possible implementation that mocks out the console, preventing the message from being displayed during test execution:

```javascript
const originalConsole = globalThis.console;

beforeEach(() => {
  globalThis.console = {
    warn: jest.fn(),
    error: jest.fn(),
    log: jest.fn()
  };
});

afterEach(() => {
  globalThis.console = originalConsole;
});
```

With the mocked console, it's now possible to assert its usage instead of printing output while running the tests. The version without the loudmouth would be the following:

```javascript
const originalConsole = globalThis.console;

beforeEach(() => {
  globalThis.console.warn = jest.fn();
});

afterEach(() => {
  globalThis.console = originalConsole;
});

test.each([['function']])(
  'should not bubble up the error when an invalid source code is provided',
  (code) => {
    const strategy = jest.fn();

    const result = Reason(code, strategy);
    expect(strategy).toHaveBeenCalledTimes(0);
    expect(result).toBeFalsy();
    expect(globalThis.console.warn).toHaveBeenCalled(); // <-- did it warn?
  }
);
```

## Final considerations

In this post, we went through four more anti-patterns related to developers starting with TDD. Those covered are among the least popular anti-patterns that came up in the survey: the secret catcher at 7th, and the nitpicker, the dodger and the loudmouth at 8th.

We saw that the secret catcher is tricky: it seems to be testing what it is supposed to test, but on close inspection we can spot the mishandled exception. The nitpicker, on the other hand, points to fragile tests, in which the assertion covers more than it needs to.

The dodger is the classic anti-pattern when starting with TDD. It leads to a one-to-one relationship between the test and the production code. Last but not least, the loudmouth can make the developer doubt whether a passing test is green for the correct reason, as warnings pollute the output while the tests are being executed.

All in all, we have now covered more than half of the 22 anti-patterns listed by James Carr; these four add more information to our toolkit while test-driving applications.