Testing anti-patterns

More and more developers are adopting testing as part of their everyday practice, improving the quality of their software as a result. Continuous testing is also an Agile practice that improves the software development lifecycle.

In the latest episode of our podcast, Codurance Talks, we focus on TDD anti-patterns, a list inspired by James Carr.

Testing in software is a challenge, given the problems it sometimes poses for developers' perception of it: perhaps they find it a boring task, or sometimes it simply comes down to very tight deadlines. On top of that, when teams decide to follow so-called best practices, a lack of skills can also be a barrier during the testing process. TDD is a methodology used across the industry as a way to deliver quality software; a lack of skills can therefore lead to undesirable tests, bringing some anti-patterns to light.

TDD testing anti-patterns, compared with the TDD methodology itself, are not explored as much as they could be.

To talk about all of this, we are joined by some very special guests: Mauricio Aniche, Olena Borzenko, Francisco Climent and Matheus Marabesi. All of them are passionate about the topic and will share their opinions and experiences from different perspectives.

 

Fran Avila:

Hello, I hope you are all doing great. I'm really pleased that you are joining us for another episode of Codurance Talks, where we talk about anything related to the IT industry, craftspeople, and the people who build it. I'm Fran Avila, software craftsperson at Codurance Spain, and I will be your host for today. Today we are going to talk about TDD and testing anti-patterns. Testing practices have seen increased adoption by developers, shifting test responsibilities left and increasing the quality of the code. Besides that, continuous testing is an Agile practice that impacts the software development lifecycle. We are going to dive deep into what these anti-patterns are and how to avoid them, to keep your tests sharp.

 

To talk about this wide world, we have some incredible people joining us at the table today. Mauricio Aniche is Tech Academy lead at Adyen and an assistant professor at Delft University of Technology, so forgive me if I mispronounce some of the Dutch names here. He has a vast background in the academic world, and he is publishing his third book, this one with Manning. What will the book be about?

Mauricio Aniche: 

Hi, Fran. Hi, everyone. I'm super excited to be here. And indeed, I have a new book that will be published by Manning; the title is Effective Software Testing. The book is all about how you can design good tests, and by good tests, I mean tests that actually find bugs. So if you're curious, just wait a little bit; the book is almost there.

 

FA: 

Cool, thank you for being here. I'm sure you have a lot to add to the conversation. Moving on to Francisco Climent, R&D software engineer at Kongsberg Maritime and a training consultant. He has developed a big part of his career around hardware, firmware and embedded systems, which is a different approach from what we are used to. How different is testing on an embedded system compared with a regular internet web app system?

 

Francisco Climent: 

Well, first of all, thank you. I would say that the main difference is that one of the dependencies is the hardware: something physical that you can touch, and that you have to wait for, right? I think that is the one and only difference, nothing else. But, you know, we have a lot of things to overcome that are different in the embedded world, more or less.

 

FA: 

Cool, thank you. I'm pretty sure your different perspective will add a lot of value to this. Moving on: Olena Borzenko, full stack developer at Adecco Group. She's an extremely experienced developer, but also a community star, and you can watch her many talks and videos from conferences, on JetBrains TV, and so on. Linking to one of your talks: what is the main misconception you have found in your career about TDD?

 

Olena Borzenco: 

I'd say two; I really saw them many times. 100% code coverage, and, one of my favourites, when we write all the tests at once and then the logic. That's basically all you need to suffer and hate testing. So those are the two things, the most common I saw in my career.

 

FA: 

Thank you for being here. I'm pretty sure these misconceptions lead to some of the anti-patterns that we will have on the table today. So finally, we have with us Matheus Marabesi, software craftsperson at Codurance Spain, experienced in development, TDD and test automation. I strongly recommend you watch his talks all over the internet: YouTube, Twitch, everywhere. So, hi Matheus.

 

Matheus Marabesi: 

Hello, everyone. It's a real pleasure to be here with all of you. It's kind of weird as well, because I'm used to listening to the podcast myself, and now I'm on the podcast with awesome people. So thanks for having me. And yeah, that last thing Olena said is interesting; let's talk about it. There's a bunch of subjects here that can spark a lot of conversation, so why not?

 

FA:

Vamoooooooooooos! :) Okay, I'm very happy to have you all here, so let's crack on. I would like to start with: what do we call an anti-pattern in TDD testing?

 

MM: 

So, let me start with this one, and then I think I can open it up. At least from the last conversations that I had, and also the videos that I watched, I see an anti-pattern, in this testing context, as something that stops you, makes the activity of writing tests more difficult, or gets in your way when you want to develop something. For example, one video that comes to mind is one from Dave Farley where he talks about those anti-patterns; the video is titled "When TDD Goes Wrong", and he goes along those lines. So I think this is something we can take into account; at least for me, it's a way to start.

 

OB:

Yeah, I agree, actually. It's when it's more harmful than it's actually helping. And if you look at the names of the anti-patterns, even just the popular ones, you can already see that nothing good is going on there. I do have a couple of examples from this handful of things. The "liar" pattern, for example: many times there was a situation when I was reading a test and I wanted to know, what is this about? What is the code doing? Usually our tests are documentation, right? So we can read something out of them. I read it: great, I understand how and when it works. Then I'm making changes and running my tests, and the situation is completely different; it doesn't work as expected at all. And it doesn't surface that fast, either: something is going wrong, you try and try and try for hours, and only afterwards do you realise, okay, something went wrong here. So yeah. I think maybe you also have a couple of examples.
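To make the "liar" concrete, here is a minimal sketch in Python (the function and test names are hypothetical, not from the episode): the first test's name promises one behaviour while its body checks something else entirely, and the second shows the honest version.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code: reduce price by the given percentage."""
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    # Liar: the name claims large discounts are rejected, but the body
    # only checks that the function returns a number, so it always passes.
    def test_rejects_discount_over_fifty_percent(self):
        result = apply_discount(100.0, 80)
        self.assertIsInstance(result, float)

    # Honest: the name and the assertion describe the same behaviour.
    def test_applies_twenty_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)
```

The liar keeps passing even if the discount logic is completely wrong, which is exactly the hours-of-confusion scenario described above.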

 

FA:

What do you think is the main reason for the "liar" anti-pattern? Why is it produced?

 

OB:

I think, first of all, we all have, well, not all of us, but many, a problem with naming. And everyone has a different context; someone is more experienced in the industry, someone isn't. This kind of confusion could also be laziness: when we refactor, move things around, change something, we don't always remember to change the name, that kind of thing. And sometimes, not sometimes, many times actually, I've heard the opinion that names are not important; they could be test1, test2, test3, fine, good enough. Yeah, I disagree with this, and I'm a big fan of beautiful, descriptive naming.

 

MA:

Yeah, for me, when you have a test that doesn't really test what you want, this can also be caused by the complexity of what you're trying to test. If you have something super complex that was not really designed to be tested, you're going to do what you can, which means maybe your assertions will be super weak. Maybe you're just going to call this huge method that runs a job that goes to the cache and to the database, and then all you can do is see if it crashes or not, right? And then your code starts to change, and your tests are suddenly not really testing what you wanted them to test. So that's how I see the reason for those tests to happen.
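A small sketch of the weak-assertion situation Mauricio describes, with hypothetical names: the first test only proves the code did not crash, while the second pins down the result we actually care about.

```python
def run_billing_job(orders):
    """Hypothetical job: total each customer's order amounts."""
    totals = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0) + amount
    return totals

# Weak: passes as long as nothing raises. It would still pass
# if every total were computed completely wrong.
def test_job_does_not_crash():
    result = run_billing_job([("alice", 10), ("alice", 5)])
    assert result is not None

# Strong: asserts the behaviour we actually care about.
def test_job_sums_per_customer():
    assert run_billing_job([("alice", 10), ("alice", 5)]) == {"alice": 15}
```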

 

FC:

I agree with you. In my experience, it mostly comes from test-after approaches. When you go test-first, you have an early warning that something is going wrong, right? But when you test later, you have nothing like that: you are facing "okay, what do I have to do in order to test that code?", and you are more focused on how to do it instead of on asserting what you want. So one of the very first things that I tend to do is assert first. Even if you are testing later, you go to the assertion first, because otherwise you are going to get lost soon. Also, I see a lot of people saying that chasing the coverage is one of the reasons for…

I will say that I have a strong opinion about that. Let's say I'm pragmatic, but I have a strong opinion, and I think it is not the chase for coverage; it is another thing. I think we have a lack of professionalism, or cultural problems, behind that. It's not the chase for coverage.
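Francisco's "assert first" habit can be sketched like this (hypothetical names): even when testing after the fact, you write the assertion before the arrange/act lines, so the desired outcome drives the rest of the test.

```python
def parse_version(text: str):
    """Hypothetical code under test: split 'major.minor' into two ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

def test_parses_major_and_minor():
    # Written in "assert first" order: the assertion below was written
    # before the act line above it, so the expected outcome (2, 7)
    # decided what setup was actually needed.
    result = parse_version("2.7")
    assert result == (2, 7)
```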

 

OB:

Also, I think it could be not so much about the coverage itself but about, for example, deadlines, or numbers on the project: we need the coverage graphs right now, and the developer just copy-pastes, copy-pastes, no matter what the name is. It's reality. It's also happening, and it's happening quite openly, I mean.

 

FA:

Yeah, I agree with all of you that sometimes coverage becomes a vanity metric, something that adds no value, and everyone is just, as you say, putting tests in there that are not testing anything. Also, one thing that came to my mind from something Mauricio said is pretty interesting: when you have a complex system and you're trying to increase coverage, and this complex system needs to go to the cache, then you start to use mocks. So, what do you think about mocks? Is using mocks an anti-pattern?

 

OB:

I don't like those tests where everything is just mocked and you're basically just testing how the mocks work. I saw that quite often. But I don't think mocks are completely an anti-pattern; sometimes you do need to mock something. But when you have everything in mocks, this is a warning, like a red card, you know.
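A minimal sketch of the "testing the mock" smell Olena describes, using Python's unittest.mock with hypothetical names: the first test exercises no production code at all, while the second uses the mock only at the boundary and asserts on what our code did with it.

```python
from unittest.mock import Mock

def send_welcome(mailer, user):
    """Hypothetical code under test: builds a greeting and sends it."""
    mailer.send(user, f"Welcome, {user}!")

# Anti-pattern: everything is a mock, so the test only verifies
# that the mock behaves like a mock. No production logic runs.
def test_only_the_mock():
    mailer = Mock()
    mailer.send("bob", "Welcome, bob!")   # calling the mock directly
    mailer.send.assert_called_once()      # asserting on what we just did

# Better: the mock stands in for the boundary, and the assertion
# checks the message our own code built.
def test_welcome_message_content():
    mailer = Mock()
    send_welcome(mailer, "bob")
    mailer.send.assert_called_once_with("bob", "Welcome, bob!")
```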

 

MM:

I mean, everyone here has something to add on that. Using mocks, I think, is not an anti-pattern, but asserting on them just for the sake of asserting, I think, can lead to another anti-pattern. Sometimes we need mocks, right? Sometimes, to trick the system behaviour, even when things are complex, they are needed. And also on coverage: I think we see it as an anti-pattern, but sometimes you can use it in your favour, right? So, I don't know, do you think coverage can help us to follow up on things, or to find bugs, something like that?

 

MA:

Regarding mocks, I think it also depends on how you like to approach software development, right? I like mocks. If I'm coding a class and I need some information that will come from a database, I would prefer to mock it rather than go to the database for this test. For me, it's just natural to use mocks. But of course it is really challenging: when you start to mock things, you really have to decide what to mock, what not to mock, and when to go for a bigger test, you know, when you want to exercise two or three classes together instead of mocking things around. I think it's just so hard to make this decision in practice. But I do see a lot of value in mocking.

 

FA:

So, based on what you said, you're talking about mocking at the unit-testing level. But would you continue to mock when integrating parts of the system? At the moment you're integrating different components, would you remove all the mocks?

 

MA:

I think it really depends. If I have a lot of control over my infrastructure, let's say the database belongs to me, I would just use my database in my integration testing. But if I'm integrating with something I don't control, let's say a bank, maybe I would mock the bank even in bigger tests. It really comes down to: how can you write good tests, tests that you can trust, that will run every time with the same results? Can you control the other side of the thing? Because you need control, right? You expect the bank to always return the same thing for your tests. So I think those are the things you should take into account when deciding to mock at higher levels.

 

MM:

In the end, it's about the feedback of the test, right? Do you have the feedback that you expect to have, or do you have to own that dependency? If we're thinking about mocks in the integration context we are talking about, I see that sometimes we need to mock and sometimes we don't. For example, I sometimes see mocking as a way to speed up a suite, a test that takes longer to run: you can reduce the running time by mocking out the dependencies that take too long, and so on. But still, I think it's a tricky question whether you should do it or not; in the end, it would be a mix of both.

 

OB:

…also the amount of mocks is kind of important, because there can be cases where you have to spend a lot of time mocking everything, and it just gets blown out of proportion: what kind of value do you get from this test? And there are many, many variables and cases. If there are a couple of dependencies, that's one thing, but if you need to spend half an hour setting everything up, for every test, it isn't…

 

MM:

By the way, sorry to jump in on that: there is an anti-pattern called "excessive setup" that describes exactly this, exactly that.
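A hedged sketch of "excessive setup", with hypothetical names: many lines of wiring for a one-line check, which usually signals that the class under test demands collaborators it does not really need.

```python
from unittest.mock import Mock

class Report:
    """Hypothetical class whose constructor demands many collaborators."""
    def __init__(self, db, cache, clock, mailer, auth):
        self.db = db  # the only collaborator row_count() actually uses

    def row_count(self):
        return len(self.db.load_rows())

# Excessive setup: a wall of wiring before a one-line assertion.
def test_row_count():
    db = Mock()
    cache = Mock()
    clock = Mock()
    mailer = Mock()
    auth = Mock()
    db.load_rows.return_value = [1, 2, 3]
    cache.get.return_value = None
    clock.today.return_value = "2021-01-01"
    report = Report(db, cache, clock, mailer, auth)
    assert report.row_count() == 3
```

The fix is usually not a smarter setup helper but a design change: let `Report` depend only on what `row_count()` needs.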



MA: 

If I can also say something else about mocking: for me, the trick in writing a good test is to make sure your test exercises what you want and is not bothered by other details. I'm going to give an example that is really close to my heart right now. Our application uses a lot of caching, but in most of our tests we don't care whether the information comes from the cache or from the database. So the caching is something that, for me, should naturally be mocked, or should not be part of what I'm trying to test in most cases. And the only way I see to do this is to, well, mock the cache. Otherwise, I would have to spin up the cache, and my tests would naturally become flaky, because, you know, caches fail from time to time and you don't have a lot of control over that. So I think it's also about this: making sure your test tests what you want, and not more than that.
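A small sketch of Mauricio's point, with hypothetical names: the cache is stubbed to behave as a miss, so the test can focus on the price lookup rather than on caching details.

```python
from unittest.mock import Mock

class ProductService:
    """Hypothetical service: try the cache, fall back to the database."""
    def __init__(self, cache, db):
        self.cache = cache
        self.db = db

    def price_of(self, sku):
        cached = self.cache.get(sku)
        if cached is not None:
            return cached
        return self.db.fetch_price(sku)

# The test cares about the price, not where it came from, so the
# cache is stubbed as a miss and stays out of the way.
def test_price_lookup():
    cache = Mock()
    cache.get.return_value = None      # behave like a cache miss
    db = Mock()
    db.fetch_price.return_value = 9.99
    service = ProductService(cache, db)
    assert service.price_of("sku-1") == 9.99
```

A real cache here would add flakiness without adding any confidence about the behaviour under test.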

 

FC:

Yeah, I agree with you, I agree with you. Regarding mocking, for example, the approach that my colleagues and I have followed in the embedded world: at the beginning, when we started testing (and, something very unfortunate, testing is not so widespread in some areas of the embedded world, because we do a lot of debugging)... So, Fran, you asked me about the difference in embedded. If we take the example Mauricio gave about reaching the database, in embedded systems touching the hardware is the same as reaching the database. At the beginning, we started to mock really all the collaborators of a single module, the unit under test. But that leads to tests highly coupled to the implementation, and that caused us a lot of pain. Our approach now is to try not to mock anything until the end, when we are reaching, let's say, the gateway to the database or to the hardware, and to only use mocks, or test doubles for collaborators, on modules that are very complex. Another kind of test double (I'm extending from mocks to test doubles in general, as Matheus pointed out) that is useful is the fake object. The fake object is one of the best things for accelerating, for speeding up, your tests. If you are testing something whose outcome is not related to some module that takes a long time to execute, replace it with a fake and go ahead. We are using these a lot.
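A minimal sketch of the fake-object idea in the embedded spirit, with hypothetical names: the fake replaces a slow hardware read with an instant in-memory one, so the logic under test runs immediately.

```python
class FakeSensor:
    """Fake: a lightweight in-memory stand-in for slow hardware reads."""
    def __init__(self, readings):
        self._readings = list(readings)

    def read(self):
        # A real sensor read might block for seconds; this is instant.
        return self._readings.pop(0)

def average_temperature(sensor, samples):
    """Hypothetical module under test: average a number of sensor reads."""
    return sum(sensor.read() for _ in range(samples)) / samples

def test_average_temperature():
    sensor = FakeSensor([20.0, 22.0, 24.0])
    assert average_temperature(sensor, 3) == 22.0
```

Unlike a mock, the fake carries a small working implementation, so the test stays decoupled from how the production code calls it.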

(to be continued…)