Hi. I work at a fairly big firm on a project developed by 12 programmers. The project is quite large (more than 20 Maven modules), and its main problem is the almost complete lack of unit tests (there are about 300 tests, but almost all of them are at the integration level and of low quality). The team lead and the company understand that having tests is a good and necessary thing, but at the moment they do not want to make writing unit tests compulsory for every developer (almost none of them has any experience writing tests, and the time frame is too tight to let developers already immersed in the project spend time NOT writing new features).
Now we are trying to split off the writing of unit tests. Developer 1 (the tester) and Developer 2 (the implementer) each receive a detailed functional specification (use cases, description, business logic, etc.); the tester is responsible for writing the tests, and the implementer, accordingly, writes the implementation against those tests.
This approach entails a number of drawbacks (for example, the tester gets no feedback from writing the code itself, the tester may misunderstand the use cases and thus write the wrong tests, the implementer, detached from the tests, may write code that is not testing-friendly, etc.).
The main thing the boss requires is that one developer writes only the implementation and the other writes only the tests.
Question: has anyone in your organization done anything like this? What problems did you face, and how did you solve them?
We write code and tests at the same time.
At one point, three months ago, we had a lot of code but not a single line of tests. Really a lot of code.
So everyone just started writing tests for the functionality they knew best. Naturally, we coordinated: "Hey, I'm going to write a test for this method, okay?" And then off we went: now everyone simply writes tests for the code they change or add.
Frankly, I see no reason or need for this division of labor, where different people write the test and the implementation. What's the point?
Besides, our tester is a QA engineer who is not versed in "your programming", so having him write unit tests (we use RSpec) is impossible in principle. Ideally, the scheme should work like this: you get a task, you write a test that covers 100% of that task, and then you write your code, periodically running the test to see how it's going. But that's the ideal. Unfortunately, this scheme does not always work, and it especially does not work when you already have megabytes of existing code where test coverage hasn't even been started.
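The "ideal scheme" above can be sketched in miniature. This is a hypothetical example in Java (the asker's project language), not from any real codebase: the check is written first, as an executable form of the specification, and the implementation is then written just until the check passes. Plain `main`-based checks are used instead of a test framework so the sketch runs standalone.

```java
// Hypothetical test-first sketch. PriceCalculator and its 10%-discount rule
// are invented for illustration; a real project would use JUnit or similar.
public class PriceCalculatorTest {

    // The "specification" as an executable check, written BEFORE the implementation:
    // orders over 1000 get a 10% discount, smaller orders pay full price.
    static void testDiscountAppliedOverThreshold() {
        PriceCalculator calc = new PriceCalculator();
        if (calc.finalPrice(2000) != 1800.0) throw new AssertionError("discount not applied");
        if (calc.finalPrice(500) != 500.0) throw new AssertionError("no discount expected");
    }

    public static void main(String[] args) {
        testDiscountAppliedOverThreshold();
        System.out.println("all tests passed");
    }
}

// The implementation, written afterwards: just enough to make the test pass.
class PriceCalculator {
    double finalPrice(double amount) {
        return amount > 1000 ? amount * 0.9 : amount;
    }
}
```

Running the test between edits gives the quick feedback loop the answer describes: the test fails until the implementation matches the stated rule.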
aiden answered on October 8th 19 at 02:22
Poor, poor test implementer... The programmers, pressed by the "rigid timetable", will pile up code, and he will have to try to sort it all out, not to mention that the programmers may simply fail to expose a minimal API that makes the tests possible at all. Tests should be written by the person who writes the code, because he knows it better than anyone and can always refactor it so that tests can be written.
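The point about refactoring code so that tests can be written usually comes down to dependency injection. A minimal hypothetical sketch in Java (the class, rule, and dates are invented for illustration): instead of calling `Instant.now()` directly, the class takes a clock as a constructor argument, so a test can pin "now" to any value.

```java
import java.time.Instant;
import java.util.function.Supplier;

// Hypothetical example of refactoring for testability: the clock is injected
// rather than read globally, so a test controls what "now" is.
class SessionChecker {
    private final Supplier<Instant> clock;
    private final Instant expiresAt;

    SessionChecker(Supplier<Instant> clock, Instant expiresAt) {
        this.clock = clock;
        this.expiresAt = expiresAt;
    }

    boolean isExpired() {
        return clock.get().isAfter(expiresAt);
    }

    public static void main(String[] args) {
        Instant deadline = Instant.parse("2020-01-01T00:00:00Z");
        // Production code would pass Instant::now; a test pins the clock instead.
        SessionChecker expired = new SessionChecker(
                () -> Instant.parse("2020-01-02T00:00:00Z"), deadline);
        SessionChecker active = new SessionChecker(
                () -> Instant.parse("2019-12-31T00:00:00Z"), deadline);
        System.out.println(expired.isExpired());  // true
        System.out.println(active.isExpired());   // false
    }
}
```

The author's own point holds here: only someone free to change the constructor signature can make this kind of refactoring, which is why splitting test-writing away from the implementer is awkward.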
You should at least start by writing tests for the code you have just written, and then gradually cover what was written earlier. Someone who understands why tests are needed writes them without reminders, because he knows that one or two days spent writing tests will replace weeks of debugging, and will help find errors that might only show up after a month or more in production. But it's hard to appreciate this if you haven't been through it yourself (I was the same). At first I was forced to write tests, and I did. Now I can't imagine writing code without them; for me they are a necessary safety margin.
The speed of writing tests also depends on the framework, so learning to use it well plays a significant role too.