What have I learned today?

I picked up a new habit recently (by definition it can’t be called a habit yet, because I have been doing it for only two weeks, but… who cares!?): collecting the new things I learned on a given day. This daily exercise, done mainly as the last thing before sleep, helps me think through the day and filter its events from multiple aspects.

I’m looking for data. It is the factual part of the events. I’m not really into factual data, but it is an aspect I need to deal with. Not to mention that being able to recall factual data forces my mind to really go into the details.

I’m looking for references. If I watch a video about the human body’s immune system, I probably won’t memorise all the information in the material. Rather, I’ll remember the topic, some details and a few keywords, and I create a reference in my mind to the video. Whenever I want to know something about the immune system, I can find this video and use it as a starting point.

I’m looking for connections or conclusions. Being able to recall a logical order of thoughts helps me confirm whether a given process was good or not. It also creates mental notes on how to do things better.

Blazor – so far so good!

During my career I have met different technology stacks, and I poked at them either as a tester (manual or test automation engineer) or, as a manager, simply delivered software with the teams I worked with (basically not caring what the technology was). The list includes PHP, Java, .NET and the mixed world of Big Data, where everything exists in varying quality. Over the years I realised that .NET is the stack that gives me the greatest comparative advantage in my projects and the most happiness in development.

The client side was always a pain point. HTML and CSS… meh… JavaScript, what a huge pile of shit! As a consequence, I always avoided building the UI part of my apps, or when I did, I gave up after a while because I just hate the fact that I need to touch JavaScript/TypeScript.

Angular 1.x eased the pain a little bit. Angular 2.x made TypeScript the default, and we got close to what strongly typed languages can give you. But still: webpack, linters, tree shaking, npm hell! and other black magic… not to mention that the landscape changes every 9-12 months or faster. It is an innovation hot spot, and that is natural. But still a clusterfuck.

Microsoft introduced Blazor in 2018. The promise is that you can use C# on both the client and the server side. Since .NET Core 3.0 came out I have been working with the Blazor Server hosting model. The experience is great! C# is everywhere. I can debug my app easily using Rider (although Rider still lacks proper Blazor support), and there is no struggle with contracts between a JS app and the server side. You can just share your domain objects. Happiness.
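As an illustration of what “C# everywhere” means, here is the canonical Blazor counter component: markup and the click handler live in one file, all in C#, with no JavaScript written by hand (a minimal sketch, not code from my project):

```csharp
@* Counter.razor: a minimal Blazor Server component. The button's click
   handler below is plain C#; the framework wires up the event over SignalR. *@
<button @onclick="Increment">Clicked @_count times</button>

@code {
    private int _count;

    // Runs on the server; the UI updates automatically after the handler.
    private void Increment() => _count++;
}
```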

Blazor still lacks features that are defaults in Angular, but the folks at Microsoft are working hard to make Blazor a very compelling option for anyone.

 

My journey at IBM – Part II.: Learning opportunities

There is a site on IBM’s intranet called YourLearning, and it is about what its name suggests.

There are very few things that you, whoever you are, have to know about me. One of them is that I’m a knowledge junkie. I need to learn something new every day, and I need to expand the boundaries of my knowledge on a daily basis. If it doesn’t happen, I get frustrated, which is not good.

Again, I had preconceptions about IBM. I have worked for a few multinational companies so far; all of them told me that they support learning, but there was always a catch. There were not enough licenses, or a subscription only to the crappiest learning site, or you begged endlessly for a book that costs 20 USD until you eventually bought it for yourself. So I started to browse IBM’s YourLearning site with the experience above in my head. And, surprise, I was wrong again. Very wrong.

Just browsing the content, I found thousands of available materials on multiple topics, covering every role available in a huge organization like IBM. OK, sounds valid. After that I found a book and, click… I was redirected to Safari Books. My heart almost jumped out of my chest! High-quality technical books for whatever the hell you want to do in the technical field! Because I work for IBM, I have access to this, and it is free.

Later, as I got more familiar with the portal, I found the learning sources section. Harvard Business Publishing. It is to the business/management field what Safari Books is to the technical field. Moreover, it is a primary source for business literature. All the new material is published there, and all the HBR articles are available in their digital library.

Besides the two above, and the others I haven’t discovered yet, the portal helps you track your progress in learning. Whatever you learn becomes part of your profile and history, and you get badges.

All in all: in my onboarding process I watched a video where Ginni Rometty talked about a journey at IBM where learning is essential. At that time I had only the preconceptions I described above; now, just scratching the surface of the YourLearning portal, I am starting to understand that IBM takes learning seriously. It points back to my experience during the interview process. With these resources a lot can be achieved…

And there are more I haven’t discovered yet.

My journey at IBM – Part I.: The People

I started my career at IBM Budapest Lab in mid-February 2020 as an engineering manager. The reason I chose them is the people I met during the interviews. I had my concerns regarding IBM: a big company (300,000+ employees), possibly big inertia (nothing happens, or only slowly), rigidity and possibly uptight managers. It turned out that I was wrong. Very wrong.

The fact is that one of my friends and former colleagues works at IBM Budapest Lab, and he told me that the vibe is similar to what we had and loved at our previous job. “IBM, oh, yeah, sure…”

A few weeks later I had a face-check interview with three managers; now one of them is my line manager and the other two are my peers. I was surprised, because all of them were nice, correct and professional, and the meeting ended without any red flags on my side. No ego flare-ups, no sign of power games, just clear interest in who I am. All the topics were about how I deal with people, and there were no questions about whether I can write code in this or that language. So far so good.

A few weeks later I met two other managers, from the US and Canada. The result on my side was the same. No red flags, no points where I would have to compromise my basic values. Moreover, it turned out that these managers are people-focused and clearly understand that people are the key to success. This was the moment when I felt that I wanted to be a member of this team and work in an environment where people are more important than technology.

There is another picture that built up in my mind while I was doing the interviews with the IBM folks. Even though they are mainly people with strong technical backgrounds, it is clear that they have a lot of knowledge and experience in managing people. This complex knowledge showed in the way they were able to translate my answers into their own language, double-checking the meaning by asking questions. It is a sign of paying attention to what I said (one of the hardest parts of a manager’s job) and matching it to their own body of knowledge. One is not capable of this with only superficial knowledge. On the other hand, if the senior managers display this quality, the organisation will follow. This was convincing.

During my first day I met both of the teams I’m going to work with. My first impression was that there are no rockstar developers, no drama queens and no signs of ego issues, just rather humble team players. It later turned out that they have a good sense of humor too. Wow! Kudos to the local managers!

 

Back to Github

When I started the Digital Library project, I gave the project management services provided by GitHub a chance, and they failed, mainly because I didn’t know them and didn’t have enough patience to pick up new knowledge; I wanted to move forward. In the following few weeks the project got up to speed, so my goal was accomplished, and there was room in my thoughts to integrate something new. During these few weeks I started to follow Microsoft’s open source projects, especially aspnetcore, dotnet/runtime and entityframeworkcore. As I read their tickets and pull requests, I slowly understood how they integrate GitHub and Azure DevOps builds. So I put together a test project and tried the GitHub and Azure DevOps integration in the most common scenarios I’m going to use. The result is that, finally, I started to use GitHub as I originally wanted.

One of the reasons I wanted to run my project fully on GitHub is self-promotion. I don’t consider being on GitHub essential to my career, as I’m an engineering manager and not a developer, but being on GitHub and being able to display some professionalism might still bring something positive in the future. The other aspect is that making your professional work transparent is a responsibility I have to deal with.

The lessons… First, you have to understand when you are ready to absorb new knowledge; trying too early may result in struggle rather than the progress you seek. There is no such thing as always being able to integrate something new into your structured knowledge. This ability can be temporarily crippled by other priorities. In my case, the need to be able to move forward was more important.

The other lesson is that you have to be able to define what is important to you. The way I wanted to manage my project was based on an earlier experience (14 developers, 4-6 features developed in parallel, feature branches and multiple supported versions). My problem space is much simpler, and I needed some time, and possibly a clear head too, to understand that. Again, the need to be able to move forward clouded my judgment.

Anatomy of implementing a REST API endpoint – Part Three

This post is about how Azure DevOps can support software delivery.

So far there is a feature ticket about the Dimension Structures CRUD functionality, and the listing and adding functionalities are already implemented. Let’s see how Azure DevOps implements transparency in software delivery.

The Dimension Structure CRUD ticket can display how many chunks (story tickets) this job was split into, along with the status of those tickets. This way it is possible to track the progress of the implementation without being distracted by lower-level, development-related information. Besides these, you can use the deadline, risk and effort fields to track your progress, but since I’m the only one working on this project, I don’t use them.

[Screenshot 2020-01-15 at 17.19.20]

A story, attached to a feature as a child, can show all the delivery-related details you need in order to see what happened. In my case the following data is displayed in a story ticket:

  • The branch on which the implementation happened.
  • Build-related information. There are two types of build links: one is a plain build link, the other is “Integrated in”. You can attach multiple of both. I use the first type to mark the build where the full implementation was successfully built on the server. Test results are displayed on the build page. For bigger projects this might be overkill; for me it works fine. I use the “Integrated in” links to mark the build where the particular implementation went into the master branch. Master is the release branch.
  • Tasks for tracking different development-related activities, where you can also track your time spent and time remaining. At this level it is also possible to mark the build where the job was completed/integrated. This is useful for bigger teams where multiple developers work on a story. In my case it would be overkill; however, builds are attached to the task automatically.
  • The pull request, which contains all the information needed to review and merge the code change into master or other branches.
  • Test-related items. I can’t use this part of Azure DevOps because I don’t use Visual Studio, so I don’t know yet how to connect test cases in a DLL to test cases in Azure DevOps. However, it is a really powerful feature and increases transparency.

Overall, when I put my delivery manager hat on and want to know the status of the team’s deliveries, a well-managed board can help a lot, especially when the tool, in this case Azure DevOps, does the majority of the work for you.

The master branch looks like this. Clean and lean…

[Screenshot 2020-01-15 at 18.06.03]

Feel free to dig into the tickets in my Digital Library project.

Anatomy of implementing a REST API endpoint – Part Two

As I finished the listing functionality of the grid, I started to implement the add method of the endpoint. From the pull request point of view, the listing implementation looks pretty good: the commits lead the reviewer through what happened and why. It is easy to review. But the implementation of the add feature went sideways. I was distracted by other things during development and couldn’t pay enough attention to committing the code whenever it was logically right. The result is a few commits and a huge one at the end.

It might seem acceptable to have a pull request like this. In my opinion, it is not really acceptable. Were I the reviewer, I would have a short discussion with the developer and try to understand why commit discipline wasn’t applied.

Let’s take a look at the last commit. It contains implementation on both the client and the server side. On the server side it contains changes in multiple places, such as MasterDataHttpClient, the controllers, the business logic (a kind of repository) and validation. Were I the one reviewing this, I would say a few WTFs, because the way we write code is not about making the computer understand it. The computer understands the code even if it is on a single line without spaces, which is basically impossible for a human being to understand. The way we change the code, the way changes are introduced by commits, the variable names and so on must all serve the easy understanding of another human being. In many cases that is the reviewer, and the developer who is going to make changes in the code a few months or years later.

Anatomy of implementing a REST API endpoint – Part One

In my Digital Library project a new API endpoint is needed. This job is tracked here, and the implementation is happening on this branch. In this post I’d like to dump my decision process during the implementation. In recent months I have been coding a lot, and I recognised a few bad habits.

Steps

What is the big picture? Where will this function be used? Currently, it will feed a UI grid where users manage Dimension Structure entities. Right now there is no plan to publish this endpoint for any other usage.

What is the purpose of this function? It returns a list of Dimension Structure entities. If there are no entities, the result is an empty list. If any error happens on the server side, it returns a Bad Request including the exception.
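The behaviour above can be sketched as an ASP.NET Core controller action. This is a hypothetical illustration of the contract (empty list on no data, Bad Request on any server-side error); the type and service names (DimensionStructure, IMasterDataLogic) are assumptions, not the project’s actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class DimensionStructuresController : ControllerBase
{
    private readonly IMasterDataLogic _logic;

    public DimensionStructuresController(IMasterDataLogic logic) => _logic = logic;

    [HttpGet]
    public async Task<IActionResult> GetAll()
    {
        try
        {
            // When there are no entities, this is simply an empty list, not a 404.
            List<DimensionStructure> result = await _logic.GetDimensionStructuresAsync();
            return Ok(result);
        }
        catch (Exception e)
        {
            // Any server-side error becomes a 400 carrying the exception details.
            return BadRequest(e);
        }
    }
}
```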

Drafting the API code, meaning pulling together all the necessary code, e.g. infrastructure, service basics, DI or whatever is needed, to have a kind of working implementation. This is required to be able to execute test cases. In this case I also had to clean up the codebase a little, because two domain layers were using the same entity (an obvious sign that a new entity type should be introduced) and I had to separate them. The first few commits contain this.

Creating integration tests, or in other words, finding the test level that is the most valuable for this implementation. This point is about finding the balance between fast-paced development and quality. There will be another post about this later. In this case there will be only a single test case.

(Note: Since all of the above happened while I was in the best coffee shop in Budapest, I had disabled continuous testing to reduce battery usage, meaning the code compiles only when I press the compile button and not on every save. As a result, I realized only a few commits later, after I had pushed the code and the server build failed, that moving the test files to another directory had caused a compilation error. I should have compiled the code before committing and pushing. I had to fix this. On top of that, the compilation error hid a failing test, which also required a fix in order to get a green build. Fix.)

And this is the point where I realised I had started the implementation with the wrong function. In order to be able to create test data, I need the Add function too. The Add function will be implemented without tests, because even if it does something unfit for the requirement (we don’t know the requirement yet), the point is having the data in the database and listing it. Luckily, the Dimension Structure entity doesn’t have anything complicated that might affect the listing logic. So this way of doing things is feasible enough.

Implementation. The implementation is done, and it contains changes on many levels of the codebase, such as the interfaces, the business logic layer (a kind of repository layer), the HTTP client and the tests. You can review the changes between commits 3d1cd9ab and c82e3772. Even in a little codebase like this, introducing a new API function causes a fair amount of impact in multiple places.

There are lessons from this phase. First, when I have continuous testing enabled in Rider, sometimes the DLLs get cached and, as a consequence, issues remain hidden. Once a full build is done, these issues can be discovered. To prevent bugs going up to origin with a push, I have to do a full “clean solution” ==> “rebuild” ==> “execute all tests” round to be sure the code is OK and the server build is not going to fail. Once I have enough experience with this caching-like phenomenon, I’ll put together a reproduction for JetBrains and file a bug.
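On the command line, roughly the same round can be done with the dotnet CLI (a sketch; DigitalLibrary.sln is a hypothetical solution file name):

```shell
# "clean solution" ==> "rebuild" ==> "execute all tests", dotnet CLI style.
dotnet clean DigitalLibrary.sln
dotnet build DigitalLibrary.sln --no-incremental   # full rebuild, bypassing incremental caching
dotnet test  DigitalLibrary.sln --no-build         # run all tests against the fresh build
```

Running this before every push is a cheap way to catch what the IDE’s cached builds hide.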

The other lesson also comes from continuous testing. I use the “Cover new and outdated tests” option, meaning that not all tests are executed on each save. On the one hand, this is one of the fastest ways to run continuous testing; on the other hand, it hides test parallelization issues. A “clean solution” ==> “rebuild” ==> “execute all tests” round discovers these.

A minimal grid implementation that can list Dimension Structure entities. Nothing special here. A little HTML with Blazor Server goodness, and that is all.
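A grid like that might look something like the following sketch. The component, client and property names here are assumptions for illustration, not the project’s actual code:

```csharp
@* DimensionStructures.razor: a minimal listing grid, Blazor Server style.
   IMasterDataHttpClient and the entity properties are hypothetical names. *@
@inject IMasterDataHttpClient Client

<table>
    <tr><th>Id</th><th>Name</th></tr>
    @foreach (var ds in _items)
    {
        <tr><td>@ds.Id</td><td>@ds.Name</td></tr>
    }
</table>

@code {
    private List<DimensionStructure> _items = new List<DimensionStructure>();

    protected override async Task OnInitializedAsync()
    {
        // Fetch the entities from the listing endpoint when the component loads.
        _items = await Client.GetDimensionStructuresAsync();
    }
}
```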

You can see all the changes on this page, where the commits and the tags created by successful builds are displayed in an easy-to-understand way. You take a look at it and can see what happened. No time is spent on parsing and collecting the information, because that has been done by the tool. You can focus on making decisions.

The other nice feature of Azure DevOps is that the ticket tracking this work has all the information, so it is easy to track what happened. As a manager responsible for delivery, I find the high level of traceability and transparency provided by this tool really valuable. But, again, this is my preference for doing software delivery, and this is not an MS-sponsored post.

Why Azure DevOps and not GitHub?

The reason why I use Azure DevOps for the Digital Library project instead of GitHub, which is the kind of default place to host an open source project, is that I know GitHub less well than Azure DevOps. I used the latter for more than 5 years at Dealogic. At first I thought that publishing my project and running it on GitHub would be a great opportunity to get to know GitHub better, but I faced a question I couldn’t find an answer to, and I decided to move everything back to Azure DevOps. The case is described below.

This is not a feature comparison.

I needed version management, builds/pipelines, some project management capabilities and a wiki where additional information can be published. Both systems can provide these. I already have an Azure DevOps account, and my VPS machine is used as a build server. I don’t have any special setup requiring special build capabilities; I simply already had this machine for other reasons, and since I had run out of free build minutes once, I started using it as a build server. It is provided by Contabo.

Azure DevOps builds can easily be connected to a GitHub repo. Just a few clicks.

Every push triggered a build against the given branch. When a build was successful, the source code was tagged with the build number. This way of tagging resulted in a release showing up on GitHub, which was something I didn’t want. I asked on Stack Overflow how it could be disabled, but no answer so far. The documentation doesn’t help at all; there is no conceptual explanation of GitHub’s processes and way of doing things. Since deleting all the releases/tags manually whenever there is a new build is not sustainable, I ditched GitHub.
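For context, the build-on-every-push behaviour comes from the pipeline trigger, roughly like this sketch of an azure-pipelines.yml:

```yaml
# Sketch: trigger a build for a push to any branch.
trigger:
  branches:
    include:
      - '*'
```

The tagging itself is not in the YAML: in Azure Pipelines it is the “Tag sources” pipeline setting (tag on success, with the build number as the tag format), and those tags are what GitHub then surfaces on its releases/tags page.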

Why did I ditch GitHub instantly? Firstly, I already had a solution for the problem, so I wasn’t forced to figure out how GitHub works in this case. Secondly, I want to spend my time working on my project, not dealing with infrastructure-related questions and, eventually, making a compromise I don’t want to make.

Azure DevOps already supports public projects, and its authorization is kind of OK.

Digital Library Project

I have moved all the code of my Digital Library project to a public Azure DevOps project, and it will be the subject of a few future articles. I’m going to discuss topics around software delivery in these articles.

The Digital Library Project is an idea about how it is possible to manage a huge amount of documents and the relations between them. It will be a wiki on drugs. Once it is done.

The story of this idea is that somewhere in the late ’90s I read The Pinball Effect by James Burke, and it ignited my thoughts at the time. But I couldn’t do anything with those thoughts, as I didn’t know anything about information science or programming back then. A few years later I studied library science with information technology, and I got some insight into how information can be structured, stored and managed in multiple ways. It was kind of a solution to my thoughts from a few years earlier. So I went deeper into programming and other related fields, and I ended up in information technology as a tester and later a manager.

In later articles I’m going to go deeper into the details.