Chinthaka Thennakoon

Introduction to Docker, AWS ECS, WSO2 APIM and WSO2 Enterprise Integrator

This article is the first in a series of articles relating to application containerisation using Docker, and deploying containerised applications in AWS Elastic Container Service (ECS).

As cloud computing gathers momentum, we see newer and more advanced computing technologies emerge to help speed up software development processes. Newer technologies also help drastically reduce the ‘human workload’ of managing development environments by automating repetitive, low-skill tasks – such as managing virtual environments and compute power.

 

Introduction

This article aims to provide a clear understanding of the following concepts before we delve into any practical detail:

  1. Containerisation
  2. Docker
  3. AWS Elastic Container Service (ECS)
  4. WSO2 Application Programming Interface Manager (APIM)
  5. WSO2 Enterprise Integrator (EI).

For the purpose of this article, I shall be using two WSO2 products – WSO2 API Manager and WSO2 Enterprise Integrator – as containerised services. Both products can be considered as two separate application services that are required to communicate with each other (see Fig 1).


(Fig 1 – Deployment overview)

 

I will also attempt to provide a follow-up article that will act as a complete user guide to containerising applications and to configuring containerised products in AWS ECS. But that is for later. For now, let’s learn some basic concepts, shall we?

 

1. Containerisation

Containerisation is ideal for solutions developers who need to run applications anywhere (physical, virtual or cloud), on any machine, using minimal resources and without having to worry about the host OS.

Make no mistake, this is not the same as virtualisation. Virtualisation moves computing resources from ‘physical’ to ‘logical’.

The concept of virtualisation is to increase logical IT resources – known as virtual systems or virtual machines – within a single physical system. Virtual machines provide an environment with all the resources that most applications require to run effectively.

In essence, the concept of containerisation is to virtualise an operating system so that multiple applications can be distributed across a single host, without the need for separate virtual machines. This is achieved by providing each application with access to a single operating system kernel: all containerised applications running on a single machine share the same OS kernel.


(Fig 2 – Virtualization vs Containerization)

 

What makes containerisation better than virtualisation

Containerisation enables ‘portability’ in applications by virtualising CPU, memory and network resources at the OS level. This in turn enables the operation of isolated, encapsulated systems that share the host kernel.

Containerised applications work well with microservices and distributed applications because containers operate independently of other containers and use minimal resources from the host machine. This makes it possible to run applications without dependency conflicts, and without the need for an entire virtual machine per application. Containers require fewer resources in terms of compute power, storage space and memory to work efficiently.

 

2. Docker – basics and commands

Docker is one of the more popular container engines used in the computing world. Apart from Docker, there are other containerisation technologies, such as rkt from CoreOS. The basic Docker terminology is as follows:

  • Image: Docker images are read-only templates for creating containers. Usually one image is based on another image (e.g. we can build an image based on the Ubuntu image, install a web server on it and run our application; the same applies to any configuration changes needed to run the application).

 

  • Container: Docker containers are runnable instances of an image. It is possible to create, start, stop, move, or delete a container using Docker commands. It is also possible to connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

 

  • Docker Daemon: Docker Daemons are capable of building, running and managing Docker containers within a host machine. Docker daemons are also capable of communicating with other daemons to manage Docker services.

 

  • Docker Client: Docker clients provide the much needed user interface for the Docker daemon. A Docker client receives commands from users and communicates with the Docker daemon on their behalf.

 

  • Docker Registries: Docker registries contain repositories of images for users to upload or download. Registries can be public or private. The public Docker registry is called the Docker Hub.

 

  • Dockerfile: A Dockerfile is a text document that contains all the commands a user needs to call on the command line to assemble an image. Docker can build images automatically by reading the instructions from a Dockerfile.

For more in-depth information about Docker and how to use it, take a look at the official Docker documentation – here.

If you are a beginner to Docker and are not familiar with basic Docker commands, I would recommend reading this (it covers important Docker commands such as pull, push, run, start, stop and delete on one page. Knowledge of these commands will be useful when I attempt to guide you through a hands-on session in my next article).
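
If you prefer to drive Docker programmatically rather than from the command line, the same pull/run/stop workflow is available through the Docker SDK for Python. The sketch below is purely illustrative – the image name, port mapping and container name are assumptions, not values used later in this series – and it presumes Docker is running locally and the docker package is installed (pip install docker).

    import docker  # Docker SDK for Python (pip install docker)

    # Connect to the local Docker daemon using the default environment settings
    client = docker.from_env()

    # Pull an image from Docker Hub - equivalent to `docker pull nginx:latest`
    image = client.images.pull("nginx", tag="latest")
    print("Pulled:", image.tags)

    # Run a container in the background - equivalent to `docker run -d -p 8080:80`
    container = client.containers.run(
        "nginx:latest",
        detach=True,
        ports={"80/tcp": 8080},  # host port 8080 -> container port 80
        name="demo-nginx",       # hypothetical container name
    )
    print("Started container:", container.short_id)

    # Stop and remove the container - equivalent to `docker stop` and `docker rm`
    container.stop()
    container.remove()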

 

3. Introduction to AWS Elastic Container Service (ECS)

To help manage containerisation of applications at large scale, Amazon Web Services provides a feature-packed Elastic Container Service, known as ECS. ECS is built to help solutions developers easily manage the scaling of containerised applications, and can be described as a highly scalable, high-performance container orchestration service. ECS eliminates the need for solutions developers to install and operate their own container orchestration software, and also removes the need to manage the scaling of virtual machine clusters or to schedule containers on those virtual machines. You may read more about AWS ECS here.

 

AWS ECS – understanding the basic components  

Once the core components of AWS ECS are understood, it is easier to visualise how they work together as one.

  • Cluster: A cluster is a logical group of AWS EC2 instances that are used to house containers.

 

  • Task definition: A task definition is a blueprint that describes how a particular Docker container is expected to launch. In other words, it is a point-in-time capture of the configuration for running an image; it can also be described as a recipe that ECS uses to run tasks within a cluster. A task definition is a JSON text file that describes one or more containers (up to ten) and their configurations. Requested CPU power, memory requirements, links between containers, networking, port settings and data storage requirements can all be configured in a task definition (see the sketch after this list).

 

  • Task: A task is a running instance of a task definition within a cluster; in other words, it is the actual deployment of the containers described in the task definition. A task may include one or more containers, and tasks are also referred to as ‘instances’ of a task definition.

 

  • Service: A service can be described as a ‘long-running task’ based on a task definition. Some tasks, such as containerised back-end services, need to be available throughout their operational life. In the event that a task fails for any reason, the Amazon ECS service scheduler launches another instance of the required task definition as a replacement and maintains the desired number of tasks in the service.

A service includes a task definition, the number of tasks to run and instructions on how those tasks should be distributed. Additionally, services allow solutions developers to run tasks behind a load balancer to help distribute traffic evenly.

  • ECS agent: The Elastic Container Service agent manages the state of the containers on a single EC2 instance. Every instance in a cluster runs its own agent. A centralised ECS control point communicates with the Docker daemon on each EC2 instance via this agent.

 

  • ECR: The Elastic Container Registry is a fully managed Docker container registry. ECR can be used to store, manage and deploy container images. Amazon ECR is a private, redundant, encrypted and highly available registry service, which makes it useful as a repository for Docker images.
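
To make the relationship between task definitions, tasks and services more concrete, the following sketch registers a minimal task definition and creates a service from it using boto3, the AWS SDK for Python. All of the names, the image URI, the cluster and the resource figures are illustrative assumptions; the actual WSO2 configurations come later in this series.

    import boto3  # AWS SDK for Python (pip install boto3)

    ecs = boto3.client("ecs", region_name="us-east-1")  # assumed region

    # 1. Register a task definition - the 'recipe' describing one container
    task_def = ecs.register_task_definition(
        family="demo-app",  # hypothetical task definition family name
        containerDefinitions=[
            {
                "name": "demo-app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app:1.0",  # placeholder image
                "cpu": 512,
                "memory": 1024,
                "essential": True,
                "portMappings": [{"containerPort": 8080, "hostPort": 8080}],
            }
        ],
    )
    revision = task_def["taskDefinition"]["taskDefinitionArn"]

    # 2. Create a service that keeps two tasks of that task definition running
    ecs.create_service(
        cluster="demo-cluster",        # hypothetical ECS cluster
        serviceName="demo-app-service",
        taskDefinition=revision,
        desiredCount=2,
    )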

The following diagram will help provide a visualised understanding of the above mentioned components.


(Fig 3 – ECS Components Interaction)

 

Description of Fig 3

  1. Build and tag your Docker image after performing all the necessary configurations, and push that image to the AWS ECR registry.
  2. Create a task definition describing the Docker image and the container configurations that should run.
  3. Create an ECS service defining how many tasks should run with the configuration described in step 2, along with the load balancer configuration.
  4. ECS sends the task start information to the ECS agent in the cluster (one agent per EC2 instance).
  5. The agent pulls the Docker images defined in the task definition (step 2).
  6. The agent starts the required number of tasks (as defined in step 3) on the EC2 instances in the cluster. Each task will include all of the containers defined in step 2.
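
Step 1 above (build, tag and push to ECR) can also be scripted. The sketch below uses the Docker SDK for Python together with boto3 to authenticate against ECR and push an image; the repository name, region and registry details are placeholders rather than values from a real deployment, and the ECR repository is assumed to already exist.

    import base64
    import boto3
    import docker

    REGION = "us-east-1"   # assumed region
    REPO = "demo-app"      # hypothetical ECR repository (must already exist)
    TAG = "1.0"

    ecr = boto3.client("ecr", region_name=REGION)
    dockerc = docker.from_env()

    # Fetch a temporary ECR login: the token decodes to "AWS:<password>"
    auth = ecr.get_authorization_token()["authorizationData"][0]
    username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
    registry = auth["proxyEndpoint"].replace("https://", "")

    dockerc.login(username=username, password=password, registry=registry)

    # Build the image from a local Dockerfile, then tag it with the ECR repository URI
    image, _ = dockerc.images.build(path=".", tag=f"{REPO}:{TAG}")
    remote = f"{registry}/{REPO}"
    image.tag(remote, tag=TAG)

    # Push the tagged image to ECR
    for line in dockerc.images.push(remote, tag=TAG, stream=True, decode=True):
        print(line)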

4. WSO2 Enterprise Integrator (EI) and API Manager

WSO2 Enterprise Integrator or Enterprise Service Bus (EI/ESB) –

WSO2 Enterprise Integrator (EI), previously known as the WSO2 Enterprise Service Bus (ESB), is a middleware solution that implements the ESB architecture model, used to design and implement interactions and communications between mutually interacting software applications within Service Oriented Architectures (SOA). For more on Service Oriented Architecture, read this.

Common use cases arise when organisational systems are required to communicate with several third-party applications that differ from one another. In such cases, it is quite a challenge to ensure consistent communication and connectivity directly with each system individually: message transformation and communicating over different transport protocols can be tedious. ESBs come in handy in such situations, and are available from IT giants such as WSO2, IBM and Oracle.

WSO2 ESB is a popular ESB, largely due to its capabilities, and is trusted by large-scale organisations across the world. It supports asynchronous message mediation, message identification and routing between applications and services, and allows messages to flow across different transport protocols such as HTTP/S, JMS and TCP, with support for both SOAP and REST messaging. WSO2 ESB also facilitates message transformation, allows for secure, reliable communications and enables service chaining.

WSO2 ESB comes pre-packaged as the integration profile within WSO2 Enterprise Integrator (EI). The Enterprise Integrator also bundles a few other WSO2 products as profiles, such as an Analytics profile (WSO2 DAS), a Message Broker profile (WSO2 MB) and an MSF4J profile for microservices.

The following diagram illustrates what WSO2 EI’s ESB Profile is capable of achieving.


(Fig 4 – WSO2 EI ESB/Integration Profile)

WSO2 Application Programming Interface Manager (APIM) –

API management is the process of designing, publishing, documenting and analysing APIs in a secure environment. By way of an API management tool, an organisation is able to ensure that both the public and private APIs it creates are consumable and secure.

API Management solutions are made available by a number of vendors such as – Apigee, Oracle, IBM and WSO2.

WSO2 API Manager (APIM) is open source and helps you create, publish, store, version, govern and secure APIs, and manage their life cycles. For more reading material click here, and for more information on working with WSO2 APIM read the official WSO2 documentation here.

WSO2 APIM comprises the following components: the API Gateway (worker/manager), API Store, API Publisher, Traffic Manager and Key Manager. The following diagram illustrates how the WSO2 APIM components interact with each other.


(Fig 5 – WSO2 APIM Components Interaction)
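
To give a feel for how these components fit together at runtime, a client typically obtains an OAuth2 access token (issued via the Key Manager and exposed through the Gateway’s token endpoint) and then calls the published API through the Gateway. The sketch below assumes the standard WSO2 APIM defaults (Gateway on HTTPS port 8243, a /token endpoint, client-credentials grant) and uses placeholder hostnames, API paths and credentials; adjust them to your own deployment.

    import requests
    from requests.auth import HTTPBasicAuth

    GATEWAY = "https://apim.example.com:8243"   # placeholder Gateway host, default HTTPS port
    CONSUMER_KEY = "xxxxxxxx"                   # from the application subscribed in the API Store
    CONSUMER_SECRET = "yyyyyyyy"

    # 1. Obtain an access token using the client-credentials grant
    token_resp = requests.post(
        f"{GATEWAY}/token",
        data={"grant_type": "client_credentials"},
        auth=HTTPBasicAuth(CONSUMER_KEY, CONSUMER_SECRET),
        verify=False,  # only for demo setups with self-signed certificates
    )
    access_token = token_resp.json()["access_token"]

    # 2. Invoke a published API through the Gateway with the bearer token
    api_resp = requests.get(
        f"{GATEWAY}/demo/1.0.0/orders",           # placeholder API context/version/resource
        headers={"Authorization": f"Bearer {access_token}"},
        verify=False,
    )
    print(api_resp.status_code, api_resp.text)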

 

Conclusion

Today, cloud is the new norm across nearly all industries. Companies move towards cloud solutions because of serverless infrastructure provisioning capabilities, and organisations are able to exploit such capabilities to eliminate server and data centre maintenance costs. Other benefits include reduced power consumption and costs, managed licences, easier handling of location requirements to meet regulatory compliance, and freedom from data centre lease expirations.

Application deployment can be tedious because of high resource demands and the requirement for applications to run in any given environment. Containerised applications address these issues thanks to their portable, scalable nature.

Docker is the most popular containerisation engine used by developers, largely due to its high performance and widespread compatibility with other technologies. AWS ECS provides a highly scalable, high-performance container orchestration cloud service for Docker containers.

In my next post, I hope to write more about containerisation with AWS ECS and how to implement two WSO2 applications within a containerised environment. This should serve as a guide for you to implement the above concepts in a practical scenario.

Stay tuned!

 


Chinthaka Thennakoon

Software Engineer | Mitra Innovation