TDD and anti-patterns - Chapter 5

23 May 2022

This is a follow-up in a series of posts on TDD anti-patterns. The first post of the series covered: the liar, excessive setup, the giant and the slow poke. These four belong to a larger list of 22 anti-patterns, formalised in James Carr's post and also discussed in a Stack Overflow thread.

In this blog post we will focus on four more of them: the stranger, the operating system evangelist, success against all odds and the free ride. Each one relates to a specific aspect of code that makes testing harder. Sometimes it comes from not doing TDD, sometimes it comes from a lack of knowledge about designing code. Either way, this is an attempt to explore those flaws and avoid them in our own code bases.

(Through the previous links you can watch the recordings of the sessions on demand; you just have to leave your email to access them.)

Takeaways

  • Chaining calls across different objects in test code can be a smell related to the Law of Demeter
  • Tying test code to a specific operating system brings portability issues
  • Avoid sliding in assertions that deserve a test case of their own
  • Keep each test case focused on a single responsibility, without too many assertions

The stranger

In this blog post from Java Revisited, the explanation of the Law of Demeter gives us a hint on why the stranger is an anti-pattern; we can also relate it to the Clean Code advice: "talk to friends, not to strangers". In their example, a method chain in production code is what exposes the stranger. Carlos Caballero, in his blog post "Demeter’s Law: Don’t talk to strangers!", also uses production code to depict what the Law of Demeter is and when it is not followed. He gives a snippet that ideally would need to be tested, and here we are going to expand on that and implement the test code.

To start with, here is the production code that depicts the violation:

person
    .getHouse() // returns a House object
    .getAddress() // returns an Address object
    .getZipCode() // returns a ZipCode object
 

Such code could potentially lead to the stranger in the test code. For example, to test whether a given person has a valid zip code, we could write something like the following:

describe('Person', () => {
    it('should have a valid zip code', () => {
        const person = FakeObject.createAPerson({ zipCode: '56565656' });
        const zipCode = person
            .getHouse()
            .getAddress()
            .getZipCode();
        expect(zipCode).toEqual('56565656');
    });
});

 

Note that if we want to access the zip code, we need to go all the way down to the ZipCode object. This could be a sign that what we really want to test is the Address itself, not the Person:

describe('Address', () => {
    it('should have valid zip code', () => {
        const address = new Address(
            '56565656',
            '1456',
            'Street X',
            'My city',
            'Great state',
            'The best country'
        );
        expect(address.getZipCode()).toEqual('56565656');
    });
});
 

The test itself has something that could be improved to avoid this almost one-to-one mapping between test and production code: the interaction between the person, house, address and zip code could be hidden behind an implementation, so that we test its output instead of walking all the way down the chain.
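
As a sketch of that idea (the classes and the getZipCode delegation below are hypothetical, not part of the original snippets), the Person could expose the zip code itself, so the test only talks to its direct "friend":

// Hypothetical minimal collaborators, just enough for the sketch.
class Address {
    constructor(zipCode) { this.zipCode = zipCode; }
    getZipCode() { return this.zipCode; }
}

class House {
    constructor(address) { this.address = address; }
    getAddress() { return this.address; }
}

// Person hides the chain behind a single method, so callers never
// reach through House and Address themselves.
class Person {
    constructor(house) { this.house = house; }
    getZipCode() { return this.house.getAddress().getZipCode(); }
}

describe('Person', () => {
    it('should have a valid zip code', () => {
        const person = new Person(new House(new Address('56565656')));
        expect(person.getZipCode()).toEqual('56565656');
    });
});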

Before moving on to the next one, remember that the stranger can also be related to test smells. There are some indications that you might be facing the stranger:

  • It depends on context
  • It is related to the xUnit Test Patterns book, in the "Test smells" section
  • It can also be related to mocks

The operating system evangelist

The operating system evangelist relates to how coupled the test code is to the operating system. The coupling can appear in different aspects of the code, for example using a specific path that exists only on Windows.
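
As a hypothetical sketch of that kind of coupling (loadConfig and the paths are made up for illustration), compare a test tied to a Windows path with a portable variant:

const fs = require('fs');
const os = require('os');
const path = require('path');

// Hypothetical function under test: reads a JSON config file from disk.
function loadConfig(configPath) {
    return JSON.parse(fs.readFileSync(configPath, 'utf-8'));
}

// Coupled: this test can only pass on Windows because of the hard-coded path.
it('loads the user configuration (Windows only)', () => {
    const config = loadConfig('C:\\Users\\me\\AppData\\app-config.json');
    expect(config.theme).toEqual('dark');
});

// Portable: the test writes its own file to a temp directory and builds the
// path with the platform's separator, so it runs anywhere.
it('loads the user configuration on any platform', () => {
    const configPath = path.join(os.tmpdir(), 'app-config.json');
    fs.writeFileSync(configPath, JSON.stringify({ theme: 'dark' }));
    expect(loadConfig(configPath).theme).toEqual('dark');
});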

A real-world case comes from the open source project Lutris. Lutris aims to run games built for Windows on Linux, and the premise of the project already sets some constraints that are expected in the code base. The result is the following test case, which launches a Linux process:

class LutrisWrapperTestCase(unittest.TestCase):
    def test_excluded_initial_process(self):
        "Test that an excluded process that starts a monitored process works"
        env = os.environ.copy()
        env['PYTHONPATH'] = ':'.join(sys.path)
        # run the lutris-wrapper with a bash subshell. bash is "excluded"
        wrapper_proc = subprocess.Popen(
            [
                sys.executable, lutris_wrapper_bin, 'title', '0', '1', 'bash', 'bash',
                '-c',
                "echo Hello World; exec 1>&-; while sleep infinity; do true; done"
            ],
            stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, env=env,
        )
 

The test case relies heavily on bash, and as a result it would fail if executed on Windows, for example. That is not to say it is bad; it is a trade-off between the focus of the project and the cost of having an abstraction.

In the book “Cosmic Python” (Chapter 3) the authors share the idea behind coupling and abstraction, using file paths as an example. That example can also be related to the strategy design pattern.
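
A minimal sketch of that idea with hypothetical names: the filesystem becomes a collaborator passed in strategy-style, so the test can hand over a fake one and never touch real OS paths:

// Hypothetical: the filesystem is a strategy given to the operation,
// so tests never depend on real directories or separators.
function synchronise(sourceDir, targetDir, fileSystem) {
    for (const file of fileSystem.list(sourceDir)) {
        if (!fileSystem.exists(`${targetDir}/${file}`)) {
            fileSystem.copy(`${sourceDir}/${file}`, `${targetDir}/${file}`);
        }
    }
}

it('copies missing files to the target directory', () => {
    const fakeFileSystem = {
        files: { '/source': ['a.txt'], '/target': [] },
        copied: [],
        list(dir) { return this.files[dir]; },
        exists() { return false; }, // pretend nothing exists in the target yet
        copy(from, to) { this.copied.push([from, to]); },
    };

    synchronise('/source', '/target', fakeFileSystem);

    expect(fakeFileSystem.copied).toEqual([['/source/a.txt', '/target/a.txt']]);
});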

The operating system evangelist also appeared in Go, in an issue that tried to mitigate the newline differences between Linux and Windows. This scenario is practically the definition of the anti-pattern: "A good example would be a test case that uses the newline sequence for windows in an assertion, only to break when ran on Linux". In that thread a user complains about the issues she has running the same tests on Windows; most of the errors are due to the difference in line endings.
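
One common way out (a hedged sketch with a made-up renderList function, not the fix applied in the Go issue) is to either build expectations with the platform's own separator or normalise line endings before comparing:

const os = require('os');

// Hypothetical function under test: joins items with the platform line ending.
function renderList(items) {
    return items.join(os.EOL);
}

// Option 1: build the expectation with the platform's own separator.
it('renders one item per line', () => {
    expect(renderList(['a', 'b'])).toEqual('a' + os.EOL + 'b');
});

// Option 2: normalise line endings before comparing against a literal.
it('renders one item per line, on any platform', () => {
    const normalised = renderList(['a', 'b']).replace(/\r\n/g, '\n');
    expect(normalised).toEqual('a\nb');
});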


Another anti-pattern that is related to the Operating System Evangelist is The Local Hero. The Local Hero is known for having everything in place locally to run an application, but as soon as you try to run it on another machine, it will fail.

 

FLASHBACK - In episode 2, besides the local hero, we discussed the anti-patterns the mockery, the inspector and the generous leftovers; the examples used are in JavaScript, Kotlin and PHP.

We already discussed the local hero in episode 2 of this series, but to reinforce how they are connected, here goes an example from the Jenkins source code:

@Test
public void testWithoutSOptionAndWithoutJENKINS_URL() throws Exception {
    Assume.assumeThat(System.getenv("JENKINS_URL"), is(nullValue()));
    // TODO instead remove it from the process env?
    assertNotEquals(0, launch("java",
            "-Duser.home=" + home,
            "-jar", jar.getAbsolutePath(),
            "who-am-i")
    );
}


This snippet is particularly interesting because whoever wrote it already noticed that some smell was going on, hence the comment: TODO instead remove it from the process env?
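
Following the TODO's direction, here is a hedged sketch (the launch helper and the jar path are made up for illustration): the environment becomes an explicit parameter, so the test can drop JENKINS_URL instead of assuming it is unset:

const { spawnSync } = require('child_process');

// Hypothetical helper: the environment is an explicit parameter, so the test
// decides what is set instead of inheriting it implicitly from the machine.
function launch(command, args, env) {
    return spawnSync(command, args, { env }).status;
}

it('fails without JENKINS_URL', () => {
    // Copy the process env and drop JENKINS_URL, instead of assuming it is unset.
    const { JENKINS_URL, ...env } = process.env;
    const exitCode = launch('java', ['-jar', 'jenkins-cli.jar', 'who-am-i'], env);
    expect(exitCode).not.toEqual(0);
});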

Last but not least, katas are usually a good way to catch these kinds of patterns early on and to push for an abstraction during the refactor phase. WordWrap is an example of a kata that aims to break content into new lines when it is longer than expected. For an explanation of the differences in line endings across operating systems, check the Baeldung.com post.
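
A hedged sketch of the kata's refactor step (a simplified character-based wrap, not a full word-aware one): making the line separator a parameter keeps the wrapping logic, and its tests, independent of the operating system:

// Sketch: the separator defaults to '\n' but can be injected, so the
// algorithm never hard-codes an OS-specific line ending.
function wordWrap(text, columns, separator = '\n') {
    if (text.length <= columns) return text;
    return (
        text.slice(0, columns) +
        separator +
        wordWrap(text.slice(columns), columns, separator)
    );
}

it('wraps text longer than the column limit', () => {
    expect(wordWrap('abcdef', 3)).toEqual('abc\ndef');
    expect(wordWrap('abcdef', 3, '\r\n')).toEqual('abc\r\ndef');
});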

Success against all odds

Throughout this series we have seen different anti-patterns that arise from not practising the test-first approach; such behaviour leads to different things that make testing difficult, for example the excessive setup and the giant, which are related to the god object.

Here, once again, we will go over an anti-pattern related to the test-first approach, but with a twist: the developer does write the test first, yet instead of watching it fail, makes it pass from the start. When this is the case, success against all odds is revealed: starting from a passing test leads to a test that passes even when a failure is expected.

To depict such a scenario, the following snippet is an attempt to implement a Spring Boot repository that paginates and filters based on a given string.

@Repository
class ProductsRepositoryWithPostgres(
    private val entityManager: EntityManager
) : Repository {

    override fun listFilteredProducts(query: String?, input: PagingQueryInput): Page<Product> {
        val pageRequest: PageRequest = input.asPageRequest()
        return if (query.isNullOrBlank()) {
            entityManager.findAll(pageRequest)
        } else {
            entityManager.findAllByName(query, pageRequest)
        }
    }
}

Now that we have seen the code that actually accesses the database and applies the filtering criteria, the following test code is the one used to test the repository.

Note that from the very beginning we are doing some heavy lifting to populate the database with different data, which could potentially be a smell.

 

Note: for the sake of the example, the teardown has been removed to keep things simple. The teardown removes all the data inserted into the database during the test.

private fun setupBeforeAll() {
    productIds = (1..100).map { db().productWithDependencies().apply().get<ProductId>() }
    productIdsContainingWood.addAll(
        (1..3).map { insertProductWithName("WoodyWoodOrange " + faker.funnyName().name()) }
    )
    productIdsContainingWood.addAll(
        (1..3).map {
            insertProductWithName(
                faker.funnyName().name() + " WoodyWoodOrange " + faker.funnyName().name()
            )
        }
    )
}
 

With the setup in place, let’s have a look at the first test case in this class. Its goal is to verify that, given the sort parameter CREATED_AT_ASC (comment number 1), the data comes back ordered accordingly.

@Test
fun `list products sorted by creation at date ascending`() {
    val pageQueryInput = PagingQueryInput(
        size = 30, page = 0, sort = listOf(Sort.CREATED_AT_ASC) // 1
    )
    val result = repository.listFilteredProducts("", pageQueryInput) // 2
 
    assertThat(result.currentPage).isEqualTo(0) // 3
    assertThat(result.totalPages).isEqualTo(4) // 4
    assertThat(result.totalElements).isEqualTo(112) // 5
 
    assertThat(result.content.size).isEqualTo(30) // 6
    assertThat(result.content).allSatisfy { productIds.subList(0, 29).contains(it.id) } // 7
}
 

Let’s dive a bit into what is going on in the code, guided by the comments:

  • The parameter we send to the repository, with the ordering and pagination we want
  • The execution of the code we want to test
  • We verify that the page returned from the repository is the first one
  • We verify that there are 4 pages in total
  • We verify that there are 112 elements in total
  • We verify that the number of items returned matches the page size asked for in the pagination
  • We verify that the list returned matches the list created in the beforeAll setup

The next test case depicts a variant of what we might want to test: the reverse order. Instead of ascending, we now test descending. Note that the majority of the asserts are the same as in the previous test case.

@Test
fun `list products sorted by creation at date descending`() {
    val pageQueryInput = PagingQueryInput(
        size = 30, page = 0, sort = listOf(Sort.CREATED_AT_DESC) // 1
    )
    val result = repository.listFilteredProducts("", pageQueryInput) // 2

    assertThat(result.currentPage).isEqualTo(0) // 3
    assertThat(result.totalPages).isEqualTo(4) // 4
    assertThat(result.totalElements).isEqualTo(112) // 5

    assertThat(result.content.size).isEqualTo(30) // 6
    assertThat(result.content).allSatisfy { productIds.subList(0, 29).contains(it.id) } // 7
}
 

Let’s avoid repeating the previous bullet list and focus on the items that are important.

The first important item is the number of assertions we might not need in each test case. Items 3 through 6 are assertions that verify the pagination and the numbers related to the list; but reading the test name, our goal is to test the sorting, not the pagination. In other words, we could have used just the last assertion.

Talking about item 7, let’s dive into it a bit more, because having such an assertion is one of the possible ways to end up with success against all odds, and in this test code it is exactly that: it asserts on a subset of the list in a way that will always be true.

In the xUnit Test Patterns book, the way to avoid such false-positive behaviour is to keep the test code as simple as possible, with no logic in it; this is called a robust test [1].
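
To make that concrete, here is a hedged sketch with made-up names: when the expectation is computed with logic, the assertion can hold no matter what the code under test returns, while a literal expectation actually catches the bug:

// Hypothetical in-memory stand-ins, just for the illustration.
const insertedIds = ['first', 'second', 'third'];
const listProducts = () => [{ id: 'third' }, { id: 'first' }]; // wrong order, one item missing

// Fragile: the expected value is computed, and a subset check like this one
// passes for any result drawn from the inserted data, even the wrong one above.
it('lists products sorted by creation date (fragile)', () => {
    const result = listProducts();
    expect(result.every((p) => insertedIds.includes(p.id))).toBe(true);
});

// Robust: a literal expectation, so the wrong order or a missing item fails.
it('lists products sorted by creation date (robust)', () => {
    const result = listProducts();
    expect(result.map((p) => p.id)).toEqual(['first', 'second', 'third']);
});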

Refactoring success against all odds

The question here is: what could we do to avoid such a thing? The proposed alternative for this test case and source code is to split responsibilities in the test cases; we can focus on sorting first and on pagination later.

The first example orders the list in ascending order. It is worth mentioning that with this approach we could potentially remove the big setup shown previously in the setupBeforeAll hook; instead, we set up the data required for each test inside the test itself.

@Test
fun `list products sorted by ascending creation date`() {
    db().productWithDependencies("created_at" to "2022-04-03T00:00:00.00Z").apply() // 1
    db().productWithDependencies("created_at" to "2022-04-02T00:00:00.00Z").apply() // 2
    db().productWithDependencies("created_at" to "2022-04-01T00:00:00.00Z").apply() // 3
 
    val pageQueryInput = PagingQueryInput(sort = listOf(SortOrder.CREATED_AT_ASC))
 
    val result = repository.listFilteredProducts("", pageQueryInput)
 
    assertThat(result.content[0].createdAt).isEqualTo("2022-04-01T00:00:00.00Z")
    assertThat(result.content[1].createdAt).isEqualTo("2022-04-02T00:00:00.00Z")
    assertThat(result.content[2].createdAt).isEqualTo("2022-04-03T00:00:00.00Z")
}
 

Once that is in place, we move on to the descending order test case, which is the same except that the setup and the assertions change:

@Test
fun `list products sorted by descending creation date`() {
    db().productWithDependencies("created_at" to "2022-04-01T00:00:00.00Z").apply()
    db().productWithDependencies("created_at" to "2022-04-02T00:00:00.00Z").apply()
    db().productWithDependencies("created_at" to "2022-04-03T00:00:00.00Z").apply()
 
    val pageQueryInput = PagingQueryInput(sort = listOf(SortOrder.CREATED_AT_DESC))
 
    val result = repository.listFilteredProducts("", pageQueryInput)
 
    assertThat(result.content[0].createdAt).isEqualTo("2022-04-03T00:00:00.00Z")
    assertThat(result.content[1].createdAt).isEqualTo("2022-04-02T00:00:00.00Z")
    assertThat(result.content[2].createdAt).isEqualTo("2022-04-01T00:00:00.00Z")
}
 

Next up is the pagination. Once we have the sorting in place, we can start to look at pagination and its aspects, again testing one specific thing at a time. The following example depicts how we could assert that we get the desired number of pages when paging the result.

@Test
fun `should have one page when the list has ten items`() {
    insertTenProducts()
    val pageQueryInput = PagingQueryInput(size = 10)

    val result = repository.listFilteredProducts(null, pageQueryInput)

    assertThat(result.totalPages).isEqualTo(1)
}


Decomposing the tests into smaller “units” helps the communication between the team members who will deal with this code later on, and also brings us closer to the already mentioned robust tests.

The free ride

The free ride is one of the least popular anti-patterns found in the survey; maybe that is because the name does not make the meaning easy to recall.

The free ride appears when testing a desired behaviour would usually require a new test case, but instead another assertion is slid into an existing one, sometimes even with logic inside the test case to make it fit.

Let’s have a look at the following example that was extracted from the puppeteer project:

it('Page.Events.RequestFailed', async () => {
  const { page, server, isChrome } = getTestState();
 
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (request.url().endsWith('css')) request.abort();
    else request.continue();
  });
  const failedRequests = [];
  page.on('requestfailed', (request) => failedRequests.push(request));
  await page.goto(server.PREFIX + '/one-style.html');
  expect(failedRequests.length).toBe(1);
  expect(failedRequests[0].url()).toContain('one-style.css');
  expect(failedRequests[0].response()).toBe(null);
  expect(failedRequests[0].resourceType()).toBe('stylesheet');
 
  if (isChrome)
    expect(failedRequests[0].failure().errorText).toBe('net::ERR_FAILED');
  else
    expect(failedRequests[0].failure().errorText).toBe('NS_ERROR_FAILURE');
  expect(failedRequests[0].frame()).toBeTruthy();
});

 

As already spoiled, the free ride is right there in the if/else statement. There are two test cases in this single test; probably the idea was to reuse the same setup code and slide an extra assertion into the same test case.

Another approach would be to split the test case in order to focus on a single scenario at a time. Puppeteer itself already provides functions to handle such scenarios; using them to split the test cases, the first test case focuses on the Chrome browser:

itChromeOnly('Page.Events.RequestFailed', async () => {
  const { page, server } = getTestState();
 
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (request.url().endsWith('css')) request.abort();
    else request.continue();
  });
  const failedRequests = [];
  page.on('requestfailed', (request) => failedRequests.push(request));
  await page.goto(server.PREFIX + '/one-style.html');
  expect(failedRequests.length).toBe(1);
  expect(failedRequests[0].url()).toContain('one-style.css');
  expect(failedRequests[0].response()).toBe(null);
  expect(failedRequests[0].resourceType()).toBe('stylesheet');
  expect(failedRequests[0].failure().errorText).toBe('net::ERR_FAILED');
  expect(failedRequests[0].frame()).toBeTruthy();
});
 

And then the second case, for Firefox:

itFirefoxOnly('Page.Events.RequestFailed', async () => {
  const { page, server } = getTestState();
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (request.url().endsWith('css')) request.abort();
    else request.continue();
  });
  const failedRequests = [];
  page.on('requestfailed', (request) => failedRequests.push(request));
  await page.goto(server.PREFIX + '/one-style.html');
  expect(failedRequests.length).toBe(1);
  expect(failedRequests[0].url()).toContain('one-style.css');
  expect(failedRequests[0].response()).toBe(null);
  expect(failedRequests[0].resourceType()).toBe('stylesheet');
  expect(failedRequests[0].failure().errorText).toBe('NS_ERROR_FAILURE');
  expect(failedRequests[0].frame()).toBeTruthy();
});


Logic inside the test case is already an indication that the free ride is playing a role. The puppeteer example can be improved even further. Now that we split the logic into two test cases, there is more duplicated code (that could be an argument in favour of adopting the free ride). If that is the case, the testing framework can help us here.

To avoid code duplication in this scenario, we could use the beforeEach hook and move the required setup there, as sketched next:
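
A hedged sketch of that refactor, reusing getTestState and the browser-specific helpers from above; the shared setup moves into beforeEach, and the browser-independent assertions get their own test case:

describe('Page.Events.RequestFailed', () => {
  let failedRequests;

  beforeEach(async () => {
    const { page, server } = getTestState();
    await page.setRequestInterception(true);
    page.on('request', (request) => {
      if (request.url().endsWith('css')) request.abort();
      else request.continue();
    });
    failedRequests = [];
    page.on('requestfailed', (request) => failedRequests.push(request));
    await page.goto(server.PREFIX + '/one-style.html');
  });

  it('captures the failed stylesheet request', () => {
    expect(failedRequests.length).toBe(1);
    expect(failedRequests[0].url()).toContain('one-style.css');
  });

  itChromeOnly('reports the Chrome error text', () => {
    expect(failedRequests[0].failure().errorText).toBe('net::ERR_FAILED');
  });

  itFirefoxOnly('reports the Firefox error text', () => {
    expect(failedRequests[0].failure().errorText).toBe('NS_ERROR_FAILURE');
  });
});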

Moving away from puppeteer, there are other ways in which the free ride can appear, as in the following code from the Jenkins project:

public class ToolLocationTest {
    @Rule
    public JenkinsRule j = new JenkinsRule();
 
    @Test
    public void toolCompatibility() {
        Maven.MavenInstallation[] maven = j.jenkins.getDescriptorByType(Maven.DescriptorImpl.class).getInstallations();
        assertEquals(1, maven.length);
        assertEquals("bar", maven[0].getHome());
        assertEquals("Maven 1", maven[0].getName());
 
        Ant.AntInstallation[] ant = j.jenkins.getDescriptorByType(Ant.DescriptorImpl.class).getInstallations();
        assertEquals(1, ant.length);
        assertEquals("foo", ant[0].getHome());
        assertEquals("Ant 1", ant[0].getName());
 
        JDK[] jdk = j.jenkins.getDescriptorByType(JDK.DescriptorImpl.class).getInstallations();
        assertEquals(Arrays.asList(jdk), j.jenkins.getJDKs());
        assertEquals(2, jdk.length); // JenkinsRule adds a 'default' JDK
        assertEquals("default", jdk[1].getName()); // make sure it's really that we're seeing
        assertEquals("FOOBAR", jdk[0].getHome());
        assertEquals("FOOBAR", jdk[0].getJavaHome());
        assertEquals("1.6", jdk[0].getName());
    }
}

Another approach to avoid the free ride in this case would be, once again, to split the test cases:

public class ToolLocationTest {
    @Rule
    public JenkinsRule j = new JenkinsRule();

    @Test
    @LocalData
    public void shouldBeCompatibleWithMaven() {
        Maven.MavenInstallation[] maven = j.jenkins.getDescriptorByType(Maven.DescriptorImpl.class).getInstallations();
        assertEquals(1, maven.length);
        assertEquals("bar", maven[0].getHome());
        assertEquals("Maven 1", maven[0].getName());
    }
 
    @Test
    @LocalData
    public void shouldBeCompatibleWithAnt() {
        Ant.AntInstallation[] ant = j.jenkins.getDescriptorByType(Ant.DescriptorImpl.class).getInstallations();
        assertEquals(1, ant.length);
        assertEquals("foo", ant[0].getHome());
        assertEquals("Ant 1", ant[0].getName());
    }
 
    @Test
    @LocalData
    public void shouldBeCompatibleWithJdk() {
        JDK[] jdk = j.jenkins.getDescriptorByType(JDK.DescriptorImpl.class).getInstallations();
        assertEquals(Arrays.asList(jdk), j.jenkins.getJDKs());
        assertEquals(2, jdk.length); // JenkinsRule adds a 'default' JDK
        assertEquals("default", jdk[1].getName()); // make sure it's really that we're seeing
        assertEquals("FOOBAR", jdk[0].getHome());
        assertEquals("FOOBAR", jdk[0].getJavaHome());
        assertEquals("1.6", jdk[0].getName());
    }
}
 
The split would also make it easier to pinpoint the problem if one of the test cases fails.

Wrapping up

We are almost at the end of the testing anti-patterns journey, and by now you might feel that testing is not just something that increases the confidence to change code or a guard against regressions. Testing can also be used to break down functionality and improve the feedback loop.

It might be a feeling (also known as a smell) or something already named by the software community, like the stranger, but if you see something that needs improvement, it probably does.

It is also important to keep (when possible) an abstraction over the “difficult parts” of the code, such as the operating system or the file path used to save data; we can refer to Cosmic Python to dive deeper into the theme of coupling and abstraction. Of course, we need to test those parts as well, but we could benefit from different types of testing for that end.

Last but not least, we saw that assertions are also a subject of debate: sometimes we use assertions that are unrelated to the goal of the test, and it can happen that we just slide in an assertion instead of creating a new test case.

All in all, testing anti-patterns are context-bound; having some of them in a code base can be something the team knows about and adopts as a trade-off.

Regardless of why you might face them in your own code base, we shared here four more anti-patterns that could be avoided, in the hope of improving the feedback loop and decreasing the pain developers perceive when starting with the test-first approach.

As always, we hope you enjoyed this new episode, and we look forward to seeing you in our last session, in which we will talk about two more anti-patterns: the jumper and the flash.

Stay tuned and enjoy testing!