Microservices: Brownfield: Reporting


Supporting reporting in a micro-services architecture can be a little complex. Unlike a monolithic architecture, where you may have only a few databases to report from, in a micro-services architecture you will have many databases to report from, since each micro-service has its own database.

The data for a report will be split across multiple micro-services, and since there is no central database from which to extract this information, you may need to join data across databases. Reporting in a micro-services architecture can also be slow.

One way to facilitate reporting is to have a dedicated reporting micro-service which calls all of our micro-services and takes care of collecting and consolidating the data. The main disadvantage appears when we are reporting on large volumes of data or when we need a report in real time.
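
As an illustration, here is a minimal sketch of such a reporting micro-service. The service URLs, endpoints, and JSON shapes are hypothetical placeholders, not part of any real system:

```python
# A minimal sketch of a dedicated reporting micro-service that calls the
# other micro-services over HTTP and consolidates their data in memory.
import requests

# Hypothetical service endpoints.
ACCOUNTS_URL = "http://accounts-service/api/accounts"
ORDERS_URL = "http://orders-service/api/orders"

def build_orders_report():
    # Collect the raw data from each micro-service's API.
    accounts = requests.get(ACCOUNTS_URL, timeout=5).json()
    orders = requests.get(ORDERS_URL, timeout=5).json()

    # Consolidate: group orders by account, i.e. an in-memory "join".
    orders_by_account = {}
    for order in orders:
        orders_by_account.setdefault(order["account_id"], []).append(order)

    return [
        {
            "account": account["name"],
            "order_count": len(orders_by_account.get(account["id"], [])),
        }
        for account in accounts
    ]
```

Because every report fans out over the network and joins the data in memory, this approach struggles with large volumes and real-time requirements, which is what motivates the data dump described next.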

Another way is to have a data dump, in which the micro-services dump their data into a central database that can later be used for reporting.
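
A minimal sketch of such a dump job follows, assuming each micro-service periodically exports its tables into the central reporting database; SQLite and the table schema are illustrative only:

```python
# A minimal sketch of the data-dump approach: a micro-service periodically
# copies its data into a central reporting database. SQLite and the
# "orders" schema are illustrative placeholders.
import sqlite3

def dump_orders(service_db_path: str, reporting_db_path: str) -> None:
    service_db = sqlite3.connect(service_db_path)
    reporting_db = sqlite3.connect(reporting_db_path)

    reporting_db.execute(
        "CREATE TABLE IF NOT EXISTS orders_report "
        "(id INTEGER PRIMARY KEY, account_id INTEGER, total REAL)"
    )
    # Copy (or refresh) the service's rows in the central database.
    rows = service_db.execute("SELECT id, account_id, total FROM orders")
    reporting_db.executemany(
        "INSERT OR REPLACE INTO orders_report VALUES (?, ?, ?)", rows
    )
    reporting_db.commit()
```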

 


Microservices: Brownfield: Transactions


When moving from a monolithic system to a micro-services architecture, we need a different approach to dealing with transactions.

Transactions are useful:

  • They ensure data integrity.
  • They allow us to update several records as part of one transaction.
  • If one or more updates (and/or creates) fails, we can roll the entire transaction back.

In a monolith, transactions are simple. We can have one process which updates and creates records. These records are part of the transaction; therefore, the same process can either commit the transaction or roll it back if there are any issues.
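
For illustration, here is a minimal sketch of such a single-process transaction, using SQLite; the table names and schema are hypothetical:

```python
# A minimal sketch of a monolithic transaction: one process creates and
# updates records, and a single commit or rollback covers all of them.
# SQLite and the table names are illustrative only.
import sqlite3

def place_order(conn: sqlite3.Connection, account_id: int, amount: float):
    with conn:  # one transaction: commits on success, rolls back on any error
        conn.execute(
            "INSERT INTO orders (account_id, total) VALUES (?, ?)",
            (account_id, amount),
        )
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, account_id),
        )
```

One process, one connection: the `with` block either commits both statements together or rolls them both back.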

In micro-services, transactions that span services are complex because several processes are involved. This means that several micro-services take part in completing one transaction. Since our transaction is distributed across multiple micro-services, observing it and diagnosing problems becomes a complex procedure; therefore, it also becomes complex to roll back.

For example, we can have an order being placed. This process will take several micro-services working together.

If one of these micro-services fails when trying to create or update a record, we will need to roll back the entire transaction.

How to handle failed transactions:

  • Option 1: Try again later (see the first sketch after this list).
    • The part of the transaction that failed is put into a queue so another service can pick it up and process it.
      • The transaction will eventually be completed.
      • It relies on other instances not failing on the same part of the transaction.
  • Option 2: Abort the entire transaction (see the second sketch after this list).
    • We detect that our transaction has failed, then we issue an undo transaction to all the micro-services involved so they undo any creates or updates.
      • Problems:
        • Who issues the undo transaction?
        • What happens when the undo transaction itself fails?
      • One way to overcome these problems is to use transaction manager software.
        • This software uses a two-phase commit.
        • Phase 1: All micro-services involved indicate to the transaction manager whether they are fine to commit their part of the transaction.
        • Phase 2: If they are all fine to commit, the transaction manager tells all participating micro-services to commit the transaction.
        • If any of the micro-services doesn't respond, or responds with a "no" to committing, the transaction manager tells all the participating micro-services to roll back the transaction.
        • Problems with using a transaction manager:
          • We are heavily dependent on it.
          • It delays the processing of our transactions; it is a potential bottleneck.
          • It is complex to implement.
          • It is even more complex when we need to communicate with a monolithic system.
            • This can be accomplished with a message queue.
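
To make the two options concrete, here is a minimal sketch of Option 1, the "try again later" approach. The in-memory queue and the shape of a "step" are illustrative only; a real system would use a durable message broker:

```python
# A minimal sketch of Option 1: the failed part of a transaction is put on
# a queue so another worker can pick it up and retry it later. An
# in-memory queue stands in for a durable message broker.
import queue

retry_queue = queue.Queue()

def run_step(step):
    """Run one part of the distributed transaction, queueing it on failure."""
    try:
        step()
    except Exception:
        retry_queue.put(step)  # another instance will pick it up later

def retry_worker():
    """Retry queued steps until they succeed. The transaction eventually
    completes, provided some instance can finish the failing step."""
    while True:
        run_step(retry_queue.get())
```

And a sketch of the two-phase commit a transaction manager performs for Option 2, assuming each participating micro-service exposes hypothetical prepare/commit/rollback operations:

```python
# A minimal sketch of a transaction manager's two-phase commit. The
# prepare/commit/rollback calls stand in for real requests to each
# participating micro-service.
def two_phase_commit(participants) -> bool:
    prepared = []
    # Phase 1: every micro-service votes on whether it can commit its part.
    for p in participants:
        try:
            if not p.prepare():
                break
            prepared.append(p)
        except Exception:  # no response counts as a "no" vote
            break
    else:
        # Phase 2: everyone voted yes, so tell them all to commit.
        for p in participants:
            p.commit()
        return True

    # Any "no" vote or failure aborts: roll back whatever was prepared.
    for p in prepared:
        p.rollback()
    return False
```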

 


Microservices: Brownfield: Migration: Database


In this section, we are going to go over splitting the monolithic database into databases that will be used by each micro-service. In this way, each micro-service will have its own database, which makes maintenance easier and is part of the whole micro-services concept.

As established in the previous articles on micro-services, we want to avoid shared databases. We want our micro-services to be as independent as possible so that they can be independently changed and deployed. A shared database limits us and makes our micro-services dependent on one another.

The approach to splitting our monolithic database into per-service databases is similar to splitting the code into bounded contexts, as explained in the previous article, Microservices: Brownfield: Migration.

We find seams in the database which correspond to seams in the code. In other words, we can take the tables that relate to a bounded context in our code and move them (or recreate them) in the new database. In our case, all the tables needed for the account functionality will be moved from the shared database into a database used exclusively by the account micro-service.
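
A minimal sketch of such a table move, with SQLite standing in for the shared and per-service databases and a hypothetical accounts schema:

```python
# A minimal sketch of moving the account tables out of the shared
# monolithic database into the account micro-service's own database.
# SQLite and the accounts schema are illustrative placeholders.
import sqlite3

def migrate_account_tables(shared_db_path: str, accounts_db_path: str) -> None:
    shared = sqlite3.connect(shared_db_path)
    accounts = sqlite3.connect(accounts_db_path)

    # Recreate the account-related tables in the new database...
    accounts.execute(
        "CREATE TABLE IF NOT EXISTS accounts "
        "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    # ...then copy the existing rows over from the shared database.
    rows = shared.execute("SELECT id, name, email FROM accounts")
    accounts.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
    accounts.commit()
```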

Note that in the process of moving from a monolith to micro-services, we may have to modify the data layer of our monolithic system to access multiple databases.

A question may cross your mind: what do we do when we have a table which is linked across seams? For example, you may have a promotion which is linked to an order, so you have two services, the Promotion service and the Order service, working together. In that case, we must provide API calls which allow us to fetch the data for that relationship. In our example of the promotion and the order, we will have the Order service fetching the specific data it needs from the Promotion service.
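
A minimal sketch of resolving such a relationship through an API call instead of a database join; the promotion-service URL and JSON shapes are hypothetical:

```python
# A minimal sketch of the Order service fetching promotion details from
# the Promotion service. The URL and payload shapes are hypothetical.
import requests

PROMOTIONS_URL = "http://promotions-service/api/promotions"

def get_order_with_promotion(order: dict) -> dict:
    # The orders database now stores only the promotion's id; the details
    # live in the Promotion micro-service and are fetched over its API.
    promotion = requests.get(
        f"{PROMOTIONS_URL}/{order['promotion_id']}", timeout=5
    ).json()
    return {**order, "promotion": promotion}
```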

Remember that we are refactoring our database into multiple databases, so we must take care of data referential integrity ourselves. This means that if we delete a customer's account, for example, we may have to take care of the orders related to that customer. Those orders exist in the Orders service. We would do this by calling a method in the Orders micro-service which, in our example, would delete or disable the orders related to the specific account ID that was deleted in the Account service. We must ensure that our micro-services talk to each other in order to keep data referential integrity.
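
A minimal sketch of that interaction, with a hypothetical Orders-service endpoint:

```python
# A minimal sketch of keeping referential integrity across services: when
# an account is deleted, the Account service tells the Orders service to
# delete or disable that account's orders. The endpoint is hypothetical.
import sqlite3

import requests

ORDERS_URL = "http://orders-service/api/accounts"

def delete_account(conn: sqlite3.Connection, account_id: int) -> None:
    with conn:
        conn.execute("DELETE FROM accounts WHERE id = ?", (account_id,))
    # Ask the Orders micro-service to clean up the orders that referenced
    # the deleted account.
    requests.delete(f"{ORDERS_URL}/{account_id}/orders", timeout=5)
```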

In the case where we have static tables that are required by all micro-services, the best action is one of the following:

  • Make that data into a configuration file available to all micro-services.
  • Have a specific micro-service just for these static tables.

The same principles apply when you have valid shared data that is read and written by multiple services: move the data to a configuration file, or create a micro-service that can be used by the other micro-services.
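
For the configuration-file option, a minimal sketch; the file name and its contents are hypothetical:

```python
# A minimal sketch of the configuration-file option for static data:
# every micro-service reads the same file instead of querying a shared
# table. The file name and key are hypothetical.
import json

def load_country_codes(path: str = "static_data.json") -> list:
    with open(path) as f:
        return json.load(f)["country_codes"]
```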

 
