Chris Baumbauer: Personal Musings


Monoliths and Modules (or Microservices and Services)

Posted: Sep 10, 2019 3:15 am


Depending on when you started in the computing field, you will have come across at least one of these patterns for building software. In the beginning, the only thing anyone knew was the monolith. It went by various names, from the application that ran on a time-shared multitasking system to the real-time process. It was the single application that did everything from waxing floors to slicing bread.

At some point, this single application became very complex. Tracking how it worked became a nightmare precisely because it was self-contained. Maintaining it was delicate work: changing one minute piece of it meant waiting months for regression tests and environment promotions to run their course, not to mention the high risk of breaking something seemingly innocuous. On top of that, reusing parts of it in another application meant copying and pasting blocks of code and hoping the right incantation could be found to reproduce the same functionality.

It is in this cauldron that modular development was born, along with its post-web-age child, microservices. The idea is to break the monolith into a core component and pull in the functionality you need as external modules. This started out with static and shared libraries for applications and kernel modules for operating systems such as Linux, MacOS, and Windows; lately it has grown to include microservices for web-based applications.
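As a rough illustration of that idea (a minimal Go sketch; the FloorWaxer interface and BasicWaxer type are made up for the example), the core depends only on a published contract, and any module that satisfies it can be swapped in:

```go
package main

import "fmt"

// FloorWaxer is the published contract. The core application depends only
// on this interface, never on a concrete implementation.
type FloorWaxer interface {
	Wax(room string) string
}

// BasicWaxer is one hypothetical implementation. It could just as easily
// live in a separate shared library, or, in the microservice version of
// this idea, sit behind an HTTP API.
type BasicWaxer struct{}

func (BasicWaxer) Wax(room string) string {
	return "waxed the " + room
}

func main() {
	var waxer FloorWaxer = BasicWaxer{} // swap in a different module here
	fmt.Println(waxer.Wax("kitchen"))
}
```

The core never needs to change when a new implementation shows up; it is the same trick whether the boundary is a linker, a kernel module loader, or an HTTP interface.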

Lately the battle seems to be over whether to split an application into microservices to fit the latest agile methodology of small, focused teams, or to fold some of those microservices back into a traditional service. I wish I had a good answer, but this battle has been waged since operating systems themselves became modularized; see the LWN article describing Andrew Tanenbaum's comparison of Linux and Minix.

When it comes to any form of architectural design, everything has trade-offs. Keeping things monolithic means you only need to worry about that single blob for testing and promotion. Observability and debugging are much easier as well, since the debugger has a very clear view of the execution flow, and there are no interdependencies that could cause unexpected failures or the performance hit of loading external libraries or talking to a remote resource. However, you can end up with a very large code base, downtime to deploy a new binary no matter how small the change, and a full testing regimen for the entire application.

With microservices or modules, the testing footprint and code base can shrink substantially. You also gain the flexibility to plug in external libraries or services, as long as they adhere to a published interface (the API). Lastly, you can update just the service or module with minimal downtime for the system as a whole, within limits: a standalone application that cannot dynamically reload libraries may still need a restart, whereas a microservice can be replaced on its own.

One of the biggest drawbacks is that the complexity of your system just skyrocketed. For standalone applications, versioning of shared libraries is a big issue, especially when they are shared between multiple applications. This had a name in the Windows world in the late 90s: DLL hell. Don't worry, this isn't just a Windows issue; it can bite you on MacOS if you use something like Homebrew, or on Linux, and even web frameworks such as Rails and Django are susceptible to it. In the microservices world it is more acute, because there is no way to use an older version of a microservice unless your organization practices blue/green deployments or versions all of its endpoints with something like /v1/foo.
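As a rough sketch of that endpoint-versioning approach (the /v1/foo and /v2/foo handlers and their payloads are made up for illustration), a service can keep the old contract alive while newer clients move to the new one:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// fooV1 keeps serving the original response shape so existing callers
// are never forced onto a version they have not been tested against.
func fooV1(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, `{"foo": "bar"}`)
}

// fooV2 changes the response shape without breaking v1 consumers.
func fooV2(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, `{"foo": {"value": "bar", "introduced": "v2"}}`)
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/foo", fooV1)
	mux.HandleFunc("/v2/foo", fooV2)
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

Both versions live in the same deployment here, which is the cheap way to do it; blue/green deployments achieve the same effect at the infrastructure level instead.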

Microservices have their own trade-offs as well. They are very easy to scale out based on load. However, because they rely on remote procedure calls in the form of HTTP requests, additional infrastructure has to be in place to discover services and proxy requests to the right one. In addition, a service can go down at any time, causing cascading failures. Lastly, debugging an individual service may be doable, but attempting to debug the system in aggregate can be very difficult.
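One common guard against those cascading failures is a hard timeout on every remote call, so a hung dependency fails fast instead of tying up the caller's workers. A minimal sketch (the inventory.internal host and its endpoint are hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// callInventory talks to a hypothetical downstream "inventory" service.
// The client-level timeout turns a hung dependency into a quick error the
// caller can degrade around, rather than a stalled request that backs up
// everything upstream.
func callInventory() error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://inventory.internal/v1/items")
	if err != nil {
		return fmt.Errorf("inventory unavailable, serving degraded response: %w", err)
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := callInventory(); err != nil {
		fmt.Println(err)
	}
}
```

Timeouts are only the first step; retries with backoff and circuit breakers follow the same reasoning, but they all exist because the remote call can simply vanish.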

You may have noticed that I avoided the coding arguments about one being easier to test and maintain. I consider those misnomers, as either style can accumulate enough technical debt to become unmaintainable. And while the testing footprint for an individual module or microservice is smaller, that is offset by the need for more integration tests to ensure proper coverage.

Summing up, the dance between monoliths and modules is not new, nor will it end anytime soon. It is fascinating how technology seesaws between extremes, especially when a new generation comes into its own. The battles being fought today were fought when I started out about 15 years ago, and they were fought about 15 years before that. It is funny to see today's microservices, built with the Spring Boot framework, Go, Ruby, or Python, shipping their own web server or compiling and linking everything into a single binary, as the new monoliths, while the serverless methodology coming into vogue, mapping a URL endpoint to a single function with something like Knative, Kubeless, or AWS Lambda, plays the part of the new modular component.

Topics: history, microservices

