My Digital Library project needs a new API endpoint. The job is tracked here, and implementation is happening on this branch. In this post I’d like to record my decision process during the implementation. In recent months I have been coding a lot, and I recognised a few bad habits.
What is the big picture? Where will this function be used? For now, it will feed a UI grid where users manage Dimension Structure entities. There is no plan to publish this endpoint for any other use.
What is the purpose of this function? It returns a list of Dimension Structure entities. If there are no entities, the result is an empty list. If any error happens on the server side, it returns a Bad Request including the exception.
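That contract can be sketched as a minimal ASP.NET Core controller action. Every identifier here (controller, route, service interface) is hypothetical and illustrative, not taken from the actual codebase:

```csharp
// Hypothetical sketch; all names are illustrative, not from the real project.
[ApiController]
[Route("api/dimensionstructures")]
public class DimensionStructuresController : ControllerBase
{
    private readonly IDimensionStructureService _service;

    public DimensionStructuresController(IDimensionStructureService service)
    {
        _service = service;
    }

    [HttpGet]
    public async Task<IActionResult> GetAll()
    {
        try
        {
            // Returns the full list; an empty list when there are no entities.
            List<DimensionStructure> result = await _service.GetAllAsync();
            return Ok(result);
        }
        catch (Exception e)
        {
            // Any server-side error becomes a Bad Request carrying the exception.
            return BadRequest(e);
        }
    }
}
```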
Drafting the API code means pulling together any necessary code — infrastructure, service basics, DI, whatever is needed — to have some kind of working implementation. This is required for being able to execute test cases. In this case I had to clean up the codebase a little bit, because two domain layers were using the same entity (an obvious sign that a new entity type needs to be introduced), and I had to separate them. The first few commits contain this work.
Create integration tests, or in other words, find the test level which is the most valuable for this implementation. This is about finding the balance between fast-paced development and quality. There will be another post about this later. In this case there will be only a single test case.
(Note: since all of the above happened while I was in the best coffee shop in Budapest, I disabled continuous testing to reduce battery usage, meaning compilation happens when I press the compile button and not on every save. As a result, I only realized that moving the test files to another directory caused a compilation error a few commits later, when the server build failed after I pushed. I should have compiled the code before committing and pushing. I had to fix this. As a further consequence, the compilation error hid a failing test, which also required a fix in order to get a green build.)
And this is the point where I realised I had started the implementation with the wrong function. In order to create test data, I need the Add function too. The Add function will be implemented without tests, because even if it does something unfit for the requirement (we don’t know the requirement yet), the point is getting data into the database and listing it. Luckily, the Dimension Structure entity doesn’t have anything complicated which might affect the listing logic. So this way of doing things is feasible enough.
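The single test case can then be sketched roughly like this — xUnit-style, assuming a hypothetical HTTP client wrapper; none of these type or method names come from the actual codebase:

```csharp
// Hypothetical integration test sketch; all names are illustrative.
public class GetDimensionStructuresShould
{
    [Fact]
    public async Task ReturnTheSeededEntities()
    {
        // Arrange: seed test data through the (untested) Add function.
        var client = new DimensionStructureHttpClient();
        await client.AddAsync(new DimensionStructure { Name = "test" });

        // Act: call the new list endpoint.
        List<DimensionStructure> result = await client.GetAllAsync();

        // Assert: the seeded entity comes back.
        Assert.Single(result);
        Assert.Equal("test", result[0].Name);
    }
}
```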
Implementation. The implementation is done, and it contains changes on many levels of the codebase: interfaces, the business logic layer (a kind of repository layer), the HTTP client, tests, and so on. You can review the changes between commits 3d1cd9ab and c82e3772. Even in a small codebase like this, introducing a new API function has a fair amount of impact in multiple places.
There are lessons from this phase. First, when I have continuous testing enabled in Rider, sometimes the DLLs get cached and, as a consequence, issues remain hidden. Once a full build is done, these issues can be discovered. To prevent bugs from reaching origin via push, I have to do a full “clean solution” ==> “rebuild” ==> “execute all tests” round to be sure the code is ok and the server build is not going to fail. Once I have enough experience with this caching-like phenomenon, I’ll put together a repro for JetBrains and file a bug.
The other lesson also comes from continuous testing. I use the “Cover new and outdated tests” option, meaning that not all tests are executed on save. On the one hand, this is one of the fastest continuous-testing setups; on the other hand, it hides test parallelization issues. A “clean solution” ==> “rebuild” ==> “execute all tests” round uncovers these.
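For what it’s worth, the same “clean solution” ==> “rebuild” ==> “execute all tests” round can be run from the command line with the dotnet CLI — this is what a pre-push check script might look like (run from the repository root):

```shell
# Equivalent of the manual round in Rider.
dotnet clean                     # "clean solution": remove build outputs
dotnet build --no-incremental    # full rebuild, ignoring cached outputs
dotnet test --no-build           # run every test against the fresh build
```

Because `--no-incremental` forces a full rebuild, the stale-DLL caching issue described above cannot hide problems in this round.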
You can see all the changes on this page, where the commits and the tags created by successful builds are displayed in an easy-to-understand way. You can take a look and see what happened. No time is spent parsing and collecting the information, because the tool has already done it. You can focus on making decisions.
The other nice feature of Azure DevOps is that the ticket tracking this work has all the information, so it is easy to follow what happened. As a manager responsible for delivery, I find the high level of traceability and transparency provided by this tool really valuable. But, again, this is just my preferred way of doing software delivery, and this is not an MS-sponsored post.