NodeJS Tutorial: Part 2

This is the continuation of the previous part of this tutorial [link], where we went over installation, basic settings, and package versioning.

Express: Setup

In the previous part [link], we showed the installation of the Express package. Express is a fast, minimalist web framework built for NodeJS.

First, we need to create a JavaScript file with the main part of our program. In this case, we create the file index.js (the entry point defined in our package.json) with this line:

var express = require('express');

The require function gives us a reference to the Express dependency package. However, this doesn't provide us anything that we can use until we create an instance of Express.

var express = require('express');

var app = express();

Now, we can start working!

 

First Application

Let's build an application that listens on a specific port:

var express = require('express');
var app = express();
var port = 8100;

app.listen(port, function(){
  console.log("Running Server...");
  console.log("Port: " + port);
});

Go to the command prompt and run your application as follows:

C:\nodejs\projects\test>node index.js
Running Server...
Port: 8100
Note: Use [Ctrl] + [C] to exit.

I agree. This isn't very impressive so far. Don't worry. We will add more to it to make it more appealing.

Before we move on, let's go over execution scripts.

Execution Scripts

If you open package.json, you will find the following lines:

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },

Each key-value pair under scripts is a command that can be used when running our application with npm. In this case, "test" is the key and "echo \"Error: no test specified\" && exit 1" is the value.

C:\nodejs\projects\test>npm test

> test@1.0.0 test C:\nodejs\projects\test
> echo "Error: no test specified" && exit 1

"Error: no test specified"
npm ERR! Test failed.  See above for more details.

Now, imagine that you create multiple js files, let's say 100 js files. How would people know which one to execute first? Well, we can use package.json to add our own key-value pair that points out which js file should be executed.

Add the following:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "node index.js"
},

Now, we can use npm start to run our application:

C:\nodejs\projects\test>npm start

> test@1.0.0 start C:\nodejs\projects\test
> node index.js

Running Server...
Port: 8100

First Application (Continued)

In order to interact with our application, we need to introduce an entry point so there can be communication.
Using the get method, we provide a routing URL "/" and the function that will be executed when that URL is requested.

This function has two parameters: the first is the request and the second is the response.
We get information from the request, and we provide information with the response.

var express = require('express');
var app = express();
var port = 8100;

app.listen(port, function(){
	console.log("Running Server...");
	console.log("Port: " + port);
});

app.get("/", function(request, response){
	response.send("<html><body><h1>Welcome</h1></body></html>");
});

Now, let's run our application and open our browser. In our browser, we are going to use the following URL: http://localhost:8100/

You could add other URLs as you see fit:

app.get("/contact_us", function(request, response){
	res.send("<html><body><h1>Contact Us</h1></body></html>");
});

app.get("/feedback", function(request, response){
	res.send("<html><body><h1>Feedback</h1></body></html>");
});

In this way, we can route to different responses based on the URL.

So far, so good.

 


NodeJS Tutorial: Part 1

Installation

Let’s begin by installing NodeJS. Go to the following [link], download the installation package recommended for most users, and while installing, select the default settings provided to you.

After installing, we must ensure that we can run NodeJS. We can do that by opening a command prompt (or terminal) and running this command:

C:\>node --version
v6.10.2

Create a folder in which to begin working. In my case, I created the following folder:

C:\nodejs\projects\test>
Note: In Windows, you may have to ensure that the environment variable PATH contains the folder where NodeJS was installed.


Building The Foundations

To begin working, we are going to use NPM, which is installed with NodeJS. NPM is short for Node Package Manager, which is in charge of the publishing of open-source NodeJS projects, as well as being our command-line utility that aids us in dependency management, package installation, and version management. This will be our tool to create and maintain our project.

Let's create a package for our project. Run this command and accept the default values by pressing the [Enter] key:

C:\nodejs\projects\test>npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (test)
version: (1.0.0)
description:
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to C:\nodejs\projects\test\package.json:

{
  "name": "test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}


Is this ok? (yes) y

Now that we have our package.json file created for us, let's install a dependency called express. This dependency is code available for us to use in our project. Run this command:

C:\nodejs\projects\test>npm install express --save
test@1.0.0 C:\nodejs\projects\test
`-- express@4.15.2
  +-- accepts@1.3.3
  | +-- mime-types@2.1.15
  | | `-- mime-db@1.27.0
  | `-- negotiator@0.6.1
  +-- array-flatten@1.1.1
  +-- content-disposition@0.5.2
  +-- content-type@1.0.2
  +-- cookie@0.3.1
  +-- cookie-signature@1.0.6
  +-- debug@2.6.1
  | `-- ms@0.7.2
  +-- depd@1.1.0
  +-- encodeurl@1.0.1
  +-- escape-html@1.0.3
  +-- etag@1.8.0
  +-- finalhandler@1.0.1
  | +-- debug@2.6.3
  | `-- unpipe@1.0.0
  +-- fresh@0.5.0
  +-- merge-descriptors@1.0.1
  +-- methods@1.1.2
  +-- on-finished@2.3.0
  | `-- ee-first@1.1.1
  +-- parseurl@1.3.1
  +-- path-to-regexp@0.1.7
  +-- proxy-addr@1.1.4
  | +-- forwarded@0.1.0
  | `-- ipaddr.js@1.3.0
  +-- qs@6.4.0
  +-- range-parser@1.2.0
  +-- send@0.15.1
  | +-- destroy@1.0.4
  | +-- http-errors@1.6.1
  | | `-- inherits@2.0.3
  | `-- mime@1.3.4
  +-- serve-static@1.12.1
  +-- setprototypeof@1.0.3
  +-- statuses@1.3.1
  +-- type-is@1.6.15
  | `-- media-typer@0.3.0
  +-- utils-merge@1.0.0
  `-- vary@1.1.1

npm WARN test@1.0.0 No description
npm WARN test@1.0.0 No repository field.

This does two things: it installs the package, which will reside in the node_modules folder, and it updates our package.json file.

Here is the content of the updated package.json file:

{
  "name": "test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.15.2"
  }
}

You may notice that under dependencies, the version of express starts with a caret ^ symbol. This symbol has a meaning; therefore, let's go over package versioning before moving forward.

Package Versioning

Versioning consists of one or more sequences of numbers or letters. In this case, it goes like this:

<Major Release>.<Minor Release>.<Patch Release>

 

Major releases break backwards compatibility. This means that code written for the old version may not work with the new version.

Minor releases are new features which don’t break the existing features.

Patch releases are bug fixes and other minor changes.

 

The caret ^ symbol means that NPM will install any version of this package that has the same major version. In this case, ^4.15.2 can be seen as ^4.x.x. This means that if 4.16 came out and we executed npm install (or update), then NPM would install/update to the new version 4.16.

 

The tilde ~ symbol is another option, which tells NPM to install/update based on the minor version. This means that if we have 4.15.2 and a new version 4.15.3 comes out, then this new version will be installed (or updated to). However, it will not install, for example, 4.16.

You can also choose not to use the caret (^) or tilde (~) symbol, which means you don't wish to take any upgrades. Writing 4.15.2 tells NPM that you only want that exact version and no updates are needed.
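
To make this concrete, here is a hypothetical dependencies block (the lodash and moment entries and their versions are invented for illustration):

"dependencies": {
  "express": "^4.15.2",
  "lodash": "~4.17.4",
  "moment": "2.18.1"
}

Here, express accepts any 4.x.x version at or above 4.15.2, lodash accepts any 4.17.x version at or above 4.17.4, and moment is pinned to exactly 2.18.1.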

There are other options and symbols that can be used with NPM; however, I am not going to go over them in this tutorial. Instead, I will just leave these two links where you can go deeper if you wish: [link], [link]

Microservices: Brownfield: Reporting


Supporting reporting in our micro-services architecture can be a little complex. Unlike a monolithic architecture, where you may have a few databases to report from, in a micro-services architecture you will have many databases to report from, since each micro-service has its own database.

The data for a report will be split across multiple micro-services, and since there is no central database from which to extract this information, you may need to join data across databases. This is also why reporting in a micro-services architecture can be slow.

One way to facilitate reporting is to have a dedicated reporting micro-service which calls all our micro-services and takes care of collecting and consolidating the data. The disadvantage shows when we are reporting large volumes of data or we wish to obtain a report in real time.
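
As a rough sketch, such a dedicated reporting micro-service could look like this (the Orders and Accounts service URLs and routes are hypothetical; assume each returns JSON):

var express = require('express');
var http = require('http');
var app = express();

// Small helper: GET a URL and resolve with the parsed JSON body.
function getJson(url) {
  return new Promise(function(resolve, reject) {
    http.get(url, function(res) {
      var body = "";
      res.on("data", function(chunk) { body += chunk; });
      res.on("end", function() {
        try { resolve(JSON.parse(body)); }
        catch (error) { reject(error); }
      });
    }).on("error", reject);
  });
}

app.get("/report/sales", function(request, response) {
  // Call each micro-service, then consolidate the results into one report.
  Promise.all([
    getJson("http://orders-service:8200/orders"),     // hypothetical URL
    getJson("http://accounts-service:8300/accounts")  // hypothetical URL
  ]).then(function(results) {
    response.send({ orders: results[0], accounts: results[1] });
  }).catch(function(error) {
    response.status(500).send("Report failed: " + error);
  });
});

app.listen(8100);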

Another way is a data dump, which means having the micro-services dump their data into a central database that can later be used for reporting.

 


Microservices: Brownfield: Transactions


When moving from a monolithic system to a micro-services architecture, we need a different approach when dealing with transactions.

Transactions are useful:

  • They ensure data integrity.
  • They allow us to update several records as part of one transaction.
  • If one or more updates (and/or creates) fails, we can roll the entire transaction back.

In a monolithic system, transactions are simple. We can have one process which updates and creates records. These records are part of the transaction; therefore, the same process can either commit the transaction or roll it back if there are any issues.
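
As a minimal sketch, a monolithic transaction could look like this (the db client and its API are hypothetical; most SQL drivers offer an equivalent):

// Hypothetical db client: begin a transaction, make several updates,
// and either commit all of them or roll all of them back.
db.beginTransaction(function(err, tx) {
  if (err) return console.log("Could not start transaction: " + err);
  tx.query("INSERT INTO orders (account_id, total) VALUES (42, 100)", function(err) {
    if (err) return tx.rollback();   // undo everything
    tx.query("UPDATE inventory SET stock = stock - 1 WHERE item_id = 7", function(err) {
      if (err) return tx.rollback(); // undo everything
      tx.commit();                   // both records persist together
    });
  });
});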

In micro-services, transactions that span services are complex because there are several processes involved. This means that several micro-services take part in completing one transaction. Since our transaction is distributed across multiple micro-services, observing and solving problems becomes a complex procedure; therefore, rolling back becomes complex as well.

For example, we can have an order being placed. This process will take several micro-services working together.

If one of these micro-services fails when trying to create or update a record, we will need to roll back the entire transaction.

How to handle failed transactions:

  • Option 1: Try again later.
    • The part of the transaction that failed is put into a queue so another service can pick it up and process it.
      • The transaction will eventually be completed.
      • It relies on other instances not failing on the same part of the transaction.
  • Option 2: Abort the entire transaction.
    • We detect that our transaction has failed, then we issue an undo transaction to all the micro-services involved so they undo any creates or updates.
      • Problems:
        • Who issues the undo transaction?
        • What happens when the undo transaction fails itself?
      • One way to overcome these problems is to use transaction manager software (see the sketch after this list).
        • This software uses a two-phase commit.
        • Phase 1: All micro-services involved indicate to the transaction manager whether they are fine to commit their part of the transaction.
        • Phase 2: If they are all fine to commit, the transaction manager tells all participating micro-services to commit the transaction.
        • If any of the micro-services doesn't respond, or responds with a "no" to committing, the transaction manager tells all the participating micro-services to roll back the transaction.
        • Problems using a transaction manager:
          • We are heavily dependent on it.
          • It delays the processing of our transactions. Potential bottleneck.
          • Complex to implement.
          • More complex when we need to communicate with a monolithic system.
            • This can be accomplished with a message queue.
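
Here is a minimal sketch of the two-phase commit a transaction manager performs. The participant objects and their prepare/commit/rollback API are hypothetical; real transaction managers are far more involved:

// Each participant represents a micro-service exposing prepare(), commit(),
// and rollback(), all returning Promises (a hypothetical API).
function twoPhaseCommit(participants) {
  // Phase 1: ask every participant whether it can commit its part.
  return Promise.all(participants.map(function(p) {
    return p.prepare().catch(function() { return "no"; }); // no response counts as "no"
  })).then(function(votes) {
    var allYes = votes.every(function(vote) { return vote === "yes"; });
    if (allYes) {
      // Phase 2: everyone voted yes, so tell all participants to commit.
      return Promise.all(participants.map(function(p) { return p.commit(); }))
        .then(function() { return "committed"; });
    }
    // Someone voted no (or failed to respond): roll everyone back.
    // (As noted above, a failing rollback here is the hard part in practice.)
    return Promise.all(participants.map(function(p) { return p.rollback(); }))
      .then(function() { return "rolled back"; });
  });
}

// Example with a stub participant that always votes yes:
var orderService = {
  prepare: function() { return Promise.resolve("yes"); },
  commit: function() { return Promise.resolve(); },
  rollback: function() { return Promise.resolve(); }
};

twoPhaseCommit([orderService]).then(function(outcome) {
  console.log("Transaction " + outcome);
});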

 


Microservices: Brownfield: Migration: Database


In this section, we are going to go over splitting the monolithic database into databases that will be used by each micro-service. In this way, each micro-service will have its own database, which makes it easier to maintain and is part of the whole micro-services concept.

As established in the previous articles related to micro-services, we want to avoid shared databases. We want our micro-services to be as independent as possible. In this way, they can be independently changeable and deployable. A shared database limits us and makes our micro-services dependent.

The approach to splitting our monolithic database into micro-service databases is similar to splitting the code into bounded contexts, as explained in the previous article, Microservices: Brownfield: Migration.

We split along seams in the database which are related to seams in the code. In other words, we can take the tables that are related to a piece of our code and move them (or recreate them) in the new database. In our case, all the tables needed for the account functionality will be taken from the shared database into the single database used exclusively by the account micro-service.

Note that in the process of moving from monolithic to micro-services, we may have to modify the data layer of our monolithic system to access multiple databases.

A question may cross your mind: what do we do when we have a table which is linked across seams? For example, you may have a promotion which is linked to an order, so you have two services, the Promotion service and the Order service, working together. In that case, we must provide API calls which allow us to fetch the data for that relationship. In our example, the Order service fetches specific data from the Promotion service, as sketched below.
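
Here is a rough sketch of the Order service exposing a route that fetches the related promotion from the Promotion service (the in-memory order data and the Promotion service host, port, and route are hypothetical):

var express = require('express');
var http = require('http');
var app = express();

// Hypothetical in-memory order data; in practice this comes from the
// Order service's own database.
var orders = { "1": { id: "1", promotionId: "55" } };

app.get("/orders/:id/promotion", function(request, response) {
  var order = orders[request.params.id];
  if (!order) return response.status(404).send("Order not found");

  // Fetch the linked promotion from the Promotion micro-service.
  http.get("http://promotion-service:8300/promotions/" + order.promotionId,
    function(res) {
      var body = "";
      res.on("data", function(chunk) { body += chunk; });
      res.on("end", function() { response.send(body); });
    }).on("error", function() {
      response.status(502).send("Promotion service unavailable");
    });
});

app.listen(8200);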

Remember that we are refactoring our database into multiple databases, so we must take care of data referential integrity. This means that if we delete a customer's account, for example, we might have to take care of the orders related to that customer. Those orders exist in the Orders service. We would do this by calling a method in the Orders micro-service which, in our example, would delete or disable the orders related to the specific account ID that was deleted in the Account service. We must ensure that our micro-services talk to each other in order to keep data referential integrity.
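
A sketch of what that could look like in the Account service (the accountsDb stub and the Orders service host, port, and route are hypothetical):

var express = require('express');
var http = require('http');
var app = express();

// Hypothetical stand-in for the Account service's own database access.
var accountsDb = { remove: function(id) { console.log("Deleted account " + id); } };

app.delete("/accounts/:id", function(request, response) {
  // Delete the account from this service's own database.
  accountsDb.remove(request.params.id);

  // Ask the Orders micro-service to disable the orders linked to this
  // account, keeping referential integrity across the two databases.
  var req = http.request({
    method: "POST",
    hostname: "orders-service",
    port: 8200,
    path: "/orders/disable-by-account/" + request.params.id
  });
  req.on("error", function(err) {
    console.log("Orders service unreachable: " + err);
  });
  req.end();

  response.send("Account " + request.params.id + " deleted");
});

app.listen(8400);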

In the case where we have static tables that are required by all micro-services, the best options are:

  • Make that data into a configuration file available to all micro-services.
  • Or, have a specific micro-service just for these static tables.

The same principles apply when you have valid shared data that is read and written by multiple services: you move the data to a configuration file, or you create a micro-service that can be used by the other micro-services.
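
For example, static reference data could live in a simple configuration file shipped to every micro-service (a hypothetical countries.json):

{
  "countries": [
    { "code": "US", "name": "United States" },
    { "code": "CA", "name": "Canada" }
  ]
}

Each NodeJS micro-service can then load it with require("./config/countries.json") instead of querying a shared table.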

 
