Have you ever had a chance to play with LEGO bricks?

If you have, you’ve probably noticed how convenient they are and how you can always create something new and different out of the same brick set. The LEGO company, founded in 1932, will soon celebrate its 100th birthday, which tells us quite a lot about the success of the company and its products.

What could be the secret behind such a big success?

Quoting Greg Beyer from TheCollector: “The success in the history of Lego is the focus of it not being a single toy but an entire system with each set using pieces that can be interchanged with other sets. It is a construction system that also allows people to build incredible things from their imaginations”. In other words, one of the key reasons for the great popularity of these bricks was (and still is) versatility. With many parts that can easily be reused, thanks to the inter-compatibility of all the produced sets, one can create endless variations of things, limited only by one’s imagination. The same inter-compatibility can be found in most of the microservices solutions deployed in the cloud these days. Each microservice communicates through a standard interface (most commonly a REST API), which means that no matter which technology stack a microservice has been developed in, it can still communicate the same way as the others and can therefore be reused in different microservices solutions, just like a LEGO brick.
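To make the LEGO analogy concrete, here is a minimal sketch of such a standard interface: a hypothetical “orders” service (its name, data and port are all invented for illustration) exposing a single REST endpoint over plain HTTP, which any other service can call in exactly the same way, regardless of the stack it was built with.

```typescript
// A minimal sketch of a hypothetical "orders" microservice exposing one REST endpoint.
// Everything here (the service name, data and port) is invented for illustration.
import { createServer } from "http";

const orders = [{ id: 1, item: "LEGO set", quantity: 2 }]; // illustrative in-memory data

createServer((req, res) => {
  if (req.method === "GET" && req.url === "/orders") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(orders));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080); // the port is arbitrary for this sketch
```

Whether the caller is written in Java, Python or Go, all it ever sees is GET /orders returning JSON, and that shared interface is what makes these bricks interchangeable.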

“Do One Thing And Do It Well”

An interesting trait of most well-designed microservices solutions can be traced back to the UNIX philosophy, originally developed by Ken Thompson, who spent most of his career at Bell Labs, where he designed and implemented the original UNIX operating system. One of the leading design considerations taken from that philosophy is, in summary, “Write programs that do one thing and do it well”. In other words, write a new program for each new feature, rather than bloating an existing program with new features. Shortened as DOTADIW (Do One Thing And Do It Well), this principle will prove itself to be an integral part of a well-designed microservices solution. By providing a set of small, well-tested, easily maintained command-line tools that are inter-compatible (they all share the same communication interface: UNIX pipes and input/output redirection), UNIX gave us LEGO-like building blocks which, combined in various compositions, can help us design quite complex solutions to almost any problem we can imagine in the digital world. Any issues that occur when assembling these LEGO-like tools into bigger compositions will most probably be found in the compositions themselves, since every part of the composition is a small, concise and well-tested unit. Catching and fixing bugs in such compositions is therefore usually a fairly easy task.
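To illustrate the idea, here is a small, hypothetical sketch (the function names and the log text are made up) of DOTADIW-style composition in TypeScript: each function does exactly one thing, and a tiny pipe() helper chains them together, much like UNIX pipes chain commands such as cat, grep and wc.

```typescript
// Each function does exactly one thing; pipe() chains them the way a shell pipeline
// chains small commands (roughly: cat access.log | grep ERROR | wc -l).
const toLines = (text: string): string[] => text.split("\n");
const onlyErrors = (lines: string[]): string[] => lines.filter(line => line.includes("ERROR"));
const countLines = (lines: string[]): number => lines.length;

// pipe() feeds the output of one step into the next, left to right.
const pipe = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a));

const countErrors = pipe(pipe(toLines, onlyErrors), countLines);
console.log(countErrors("INFO start\nERROR disk full\nERROR timeout")); // prints 2
```

If the final count ever looks wrong, the bug is almost certainly in how the pieces were wired together, because each piece on its own is trivial to test, which is exactly the point made above.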

Composing a symphony

We can find a similar analogy in functional programming, too. In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions, or more precisely, pure functions. The difference is that “purely functional programming consists of ensuring that functions, inside the functional paradigm, will only depend on their arguments, regardless of any global or local state”. In other words, when a pure function is called with the same arguments, it will always return the same result, i.e. its result is not affected by any state or side effects. This is also an important characteristic of a scalable microservice, since statelessness allows us to easily scale a microservice up or down, dispatching incoming requests to the replicas randomly, without having to worry about which replica will process the next request. These stateless (pure) functions are usually written with the DOTADIW principle from the UNIX philosophy in mind, namely short and concise, easily testable and easy to maintain. A composition of such functions, however, is just like a LEGO creation: it enables us to design quite powerful and complex functionality. One such example of functional programming can be found in the Microsoft .NET LINQ library. Each LINQ extension method is small, easy to understand and well tested. By combining them into a composition, we can create complex solutions that are easy to maintain, considering that the only thing we need to care about (test and maintain) is the composition itself; the parts of the composition have already been tested and are usually bug-free.
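As a rough analogue, sketched here in TypeScript rather than C# and with made-up data, each step in the chain below is a small, pure function, and filter, map and reduce play approximately the roles of LINQ’s Where, Select and Aggregate.

```typescript
// Each step below is a small, pure function of its inputs: the same orders and
// customer always produce the same total, with no hidden state involved.
interface Order { customer: string; amount: number; }

const orders: Order[] = [             // made-up data, purely for illustration
  { customer: "alice", amount: 120 },
  { customer: "bob", amount: 40 },
  { customer: "alice", amount: 60 },
];

const totalForCustomer = (all: Order[], customer: string): number =>
  all
    .filter(o => o.customer === customer)      // roughly LINQ's Where
    .map(o => o.amount)                        // roughly LINQ's Select
    .reduce((sum, amount) => sum + amount, 0); // roughly LINQ's Aggregate / Sum

console.log(totalForCustomer(orders, "alice")); // 180, regardless of any outside state
```

Only the chain itself needs to be tested; the individual steps are already small, well-understood building blocks.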

So, what are microservices?

To quote Google: “Microservices architecture (often shortened to microservices) refers to an architectural style for developing applications. Microservices allow a large application to be separated into smaller independent parts, with each part having its own realm of responsibility”. In short, it’s an architectural style for software solution design that suggests composing small services into a bigger, more complex solution.

Also, quoting Chris Richardson from microservices.io:

Microservices – also known as the microservices architecture – is an architectural style that structures an application as a collection of services that are:

  • Independently deployable
  • Loosely coupled

Services are typically organized around business capabilities. Each service is often owned by a single, small team.

It took a micro to get to the macro

The main motivation for looking into a microservices architecture when designing a software solution is, obviously, scalability. As software solutions become bigger and more complex, they demand more resources simply to keep serving the ever-increasing traffic generated by end users. We can overcome this by buying more powerful and more expensive hardware, or we can distribute the load among many different machines, which don’t need to be particularly powerful or expensive. The problem arises when we need to scale some of the resources, like one table in the database, but not the others. This is the point where the need for a correct segregation of functionality comes into play. While developing monolith and SOA applications, developers realized they had to scale their resources disproportionately and inefficiently, wasting resources (and money), so they had to find a more efficient way to scale. Microservices architecture provides us with efficient and virtually unlimited scalability. Efficiency, in terms of utilized resources, means that each time we have to scale up or down, we can do so by scaling only the small parts of the system that actually require it. Sounds pretty cool, right? Well, it is, if done properly. Otherwise, we’re probably going to end up like many others, ranting against microservices and arguing that they add too much complexity to otherwise simple solutions.
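As a toy sketch of why statelessness makes this kind of scaling cheap (the replica addresses below are invented), a dispatcher can hand each incoming request to any replica of a stateless microservice, so scaling up or down is just a matter of changing the list of replicas.

```typescript
// Hypothetical replica addresses of a single scaled-out, stateless microservice.
const replicas = [
  "http://orders-1.internal:8080",
  "http://orders-2.internal:8080",
  "http://orders-3.internal:8080",
];

// Because the service is stateless, any replica can handle any request,
// so picking one at random is good enough for this sketch.
const pickReplica = (): string =>
  replicas[Math.floor(Math.random() * replicas.length)];

console.log(`Dispatching request to ${pickReplica()}`); // scaling = resizing the list
```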

Developers! Developers! Developers! DevOps?

With the number of microservices constantly increasing, as well as the number of deployed instances of each microservice (due to scaling requirements), we usually need tools to automate the process of deploying, updating, testing and monitoring each microservice. So, it’s not unusual to have a team of people supporting and managing our ever-growing infrastructure, i.e. a DevOps team. Containerized applications are a natural fit for microservices deployments, since containers give us a convenient way of deploying multiple instances of the same microservice (replicas) to support scalability. Orchestrating those deployments can also be automated, so that upgrades roll out smoothly with zero downtime (using rolling updates), and we need monitoring tools to conveniently keep an eye on any issues that arise in our deployments. These activities represent some of the typical DevOps responsibilities in microservices solutions, and they add extra cost to our microservices projects. Coincidentally (or not), the service offerings of today’s cloud providers are very well aligned with the microservices architecture, and the providers are the main protagonists of this design approach. They offer fine-grained services that can be mapped one-to-one onto the parts of a microservices solution. This has resulted in a new concept named “serverless computing”, where all the parts of the software solution are implemented as cloud services, so that the entire solution practically lives in the cloud, without any “external” machines, hence the name.

Were microservices a bad idea?

Many people are disappointed by microservices, given their bad experiences working with them. Their arguments mostly revolve around perceived, unneeded complexity. The main stumbling block on the way to a good microservices solution design is the correct segregation of functionality. Just like in UNIX, we need small, independent tools which do one thing and do it well. Eventually, we might notice patterns of frequently used compositions which would prove more efficient if implemented as standalone microservices. In other words, it might be beneficial to periodically group some related microservices into a bigger one, without ruining the functional segregation, because otherwise we reach the point where most of the effort is spent maintaining such a large number of small parts. When we keep these things in mind while segregating the functionality of our solution into smaller pieces, microservices design will prove itself to be a straightforward activity with beneficial results.

Of course, it should be obvious that microservices are more suitable for bigger and more complex software solutions, considering the main benefits they bring. In the case of a small software application, it might be easier, quicker and cheaper to build one monolithic app. Also, if we’re in a startup, trying to get to market as quickly as possible with a proof-of-concept in order to raise capital for further development, starting with a microservices approach might be overkill. At an early stage, we’re probably trying to quickly build something to showcase and validate our idea on the market, so taking the microservices route might actually slow us down instead. It goes without saying that we shouldn’t rely on silver-bullet solutions for each and every problem we’re trying to solve. After all, we don’t go around the house fixing every possible problem with, say, just a screwdriver. Not every problem can be solved with a screwdriver, so it shouldn’t be surprising that microservices are not a one-size-fits-all solution either.
