Role of Adaptive Authentication in Customer Identity and Access Management (CIAM)


Adaptive authentication (AA), also known as risk-based authentication, is a subset of multi-factor authentication that seeks to match the requirement for particular user credentials with the perceived risk posed by the requested authentication.

The factors below are taken into consideration when designing Multi-Factor Authentication (MFA) to prove that someone or something has the right identity:

  • Things you know (knowledge), such as a password or PIN
  • Things you have (possession), such as an OTP, tokens or certificates
  • Things you are (inherence), such as a biometric like fingerprints or voice recognition

AA is an authentication model that seeks to reduce the burden of authentication on users, providing a better experience, while still ensuring the right level of security where and when necessary.

Issues with Multi-Factor Authentication (MFA)

  • It is more cumbersome for the user, yet without awareness of the situation (e.g. swapped SIMs, stolen phones)
  • It adds unnecessary cost for the authentication steps, such as SMS or push notification costs, when the situation doesn't necessitate it
  • Having more steps in the authentication flow always impacts the performance of the CIAM solution.

For example, if you are accessing an application from a public network, MFA is a great way to ensure appropriate security and authentication of the user. However, using the same MFA when the user is accessing from home or their corporate network adds unnecessary overhead to the system and imposes an unnecessarily cumbersome customer experience.

What are the deciding factors for Adaptive Authentication?


Location & Network: Where is the user when trying to access information – home, office, abroad? Is the connection via a private or public network? Are the network and location known and secure?

Device: Whether the device is corporate-managed, and whether the device has had previous access.

Time: Is access being requested at unusual times, for example from the office at 3 am, during the weekend or on public holidays? Time will mostly be used in conjunction with other factors, such as IP information, to find the appropriate authentication option.

User's Claims/Attributes: Authentication is based on the permissions of the user. Before authentication takes place, the relevant attribute, such as the user's role, must be retrieved. This information can be captured on a landing screen or passed as a parameter in the authentication request itself.

Analytics-related decision: Collects previous events and real-time request information, detects complex event patterns and uses machine learning models to decide on the MFA method.

How Adaptive Authentication Works

The diagram below explains how AA works. It can be implemented as a single option or as a mixture of the three available options, depending on the authentication needs derived from the AA deciding factors discussed above.

Image 1: MFA Decision – Adaptive Authentication

Use Cases

1. Location – If a user accesses the system through their known corporate network, basic authentication is prompted, while the same user accessing the system through an external network triggers MFA, e.g. a One-Time Password (OTP). (A minimal script covering this and the next use case is sketched after this list.)

2. Role – Users log in to a system which has different types of roles, such as Admin, Supervisor and Operator. Admin and Supervisor authentication will use an OTP, simply because their access and permissions present a greater risk than those of the Operator, who will have basic authentication.

3. Group Authentication – Users have a single login that authenticates them across the ABC group of companies, giving them access to multiple applications. Each company has a different Identity Provider (IdP), and users sign in with the username and password of their own company within the group.

4. Mitra, X and Y are companies under the LMN Group, which recently bought Application Z. The IT department of the LMN Group is currently working on enabling authentication for all users of the group to access Application Z. Propose a solution for this integration.

  • Please note that all three companies have different email domains for their users, and each user's email address is used as their username.
  • Build an integrated authentication solution to support the above scenario.
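
Use cases 1 and 2 map naturally onto a WSO2 IAM adaptive authentication script. The sketch below is a minimal illustration: it assumes step 1 is configured as basic (username and password) authentication and step 2 as an OTP authenticator in the service provider's Authentication Step Configuration, and the corporate network prefix and role names are hypothetical placeholders.

var corporateNetworkPrefix = '10.20.'; // hypothetical corporate IP range

var onLoginRequest = function(context) {
    executeStep(1, {
        onSuccess: function(context) {
            var user = context.steps[1].subject;
            var fromCorporateNetwork =
                context.request.ip.indexOf(corporateNetworkPrefix) === 0;
            // Step up to OTP for external networks (use case 1) or for
            // high-privilege roles (use case 2); otherwise basic auth is enough.
            if (!fromCorporateNetwork || hasRole(user, 'admin') || hasRole(user, 'supervisor')) {
                executeStep(2);
            }
        }
    });
};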

How to Build Adaptive Authentication with WSO2 IAM

There are many popular CIAM solutions in the industry that support the AA concept. WSO2 IAM is one of the best open-source solutions supporting AA.

AA scripts in WSO2 IAM are written in JavaScript; the core API reference documents the available functions and fields, so refer to it for more information.

  • 1. Script Function Creation
    • Go to the WSO2 IAM admin console → Manage → Function Libraries → Write the Function

Sample Script File 

var utilFunction = require('utilFunction.js'); // reference another adaptive script from this script

var successResponseMessage = { // define an object
    'status': 90001,
    'statusMsg': 'External API call Success'
};

var onLoginRequestProcess = function(requestIp) {
    var networkData = utilFunction.getCorporateNetworkData(requestIp);
    // API call to get some other data
    httpPost(utilFunction.externalEndpoint, networkData, {
        onSuccess: function(context, jsonResponse) {
            Log.info("External API call success.");
            // Instruct to execute the steps defined in the Authentication Step
            // Configuration of the SP. Based on the response, each condition
            // can be implemented like below.
            executeStep(1, {
                authenticationOptions: [{
                    idp: 'IDPNameAssignedInTheStep'
                }]
            }, {
                onSuccess: function(context) {
                    Log.info("Success " + successResponseMessage.status);
                }
            });
        },
        onFail: function(context, jsonResponse) {
            Log.info("External API call failed!");
        },
        onTimeout: function(context, jsonResponse) {
            Log.info("External API request timeout.");
        }
    });
};

// Export the method so it can be called from another script file, or from the
// adaptive authentication script under the authentication steps of the
// service provider application.
module.exports.onLoginRequestProcess = onLoginRequestProcess;
  • 2. Apply the Script to the Authentication Flow
    • The adaptive script should be applied to the Service Provider (SP) application in WSO2 IAM
    • The service provider is the entity that configures how someone/something authenticates to the Identity Server, and the authentication steps are defined there. The adaptive script is applied under 'Script Based Adaptive Authentication' in the Authentication Step Configuration, within the Local & Outbound Authentication Configuration section.

a. Open the Service Provider → Local & Outbound Authentication Configuration → Authentication Step Configuration → Script Based Adaptive Authentication.

Image 2: Sample Advanced Authentication Configuration of SP in WSO2 IAM

  • Apart from defining the dynamic authentication sequence, the functions below can also be achieved using an adaptive script in WSO2 IAM.
    • Execute extra functions after authentication, e.g. authorisation; this can be an external endpoint that authorises the user, or internal validation against roles and permissions based on the user claims
    • Process the user attributes after authentication, e.g. combining or splitting them before sending them back to the application. For example, if the user's Firstname is Kamali and the Lastname is Perera, and the calling application requires a Fullname, simply combine Firstname and Lastname so that Fullname is sent as Kamali Perera (see the sketch after this list)
    • Adding extra analytics
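
As a minimal illustration of the attribute-processing point above, the sketch below combines Firstname and Lastname into Fullname after authentication. The claim URIs follow the usual WSO2 local claim dialect, but treat them as assumptions to verify against your own claim configuration.

var onLoginRequest = function(context) {
    executeStep(1, {
        onSuccess: function(context) {
            var user = context.steps[1].subject;
            var firstName = user.localClaims['http://wso2.org/claims/givenname'];
            var lastName = user.localClaims['http://wso2.org/claims/lastname'];
            // e.g. 'Kamali' + ' ' + 'Perera' -> 'Kamali Perera'
            user.localClaims['http://wso2.org/claims/fullname'] = firstName + ' ' + lastName;
        }
    });
};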

b. Add the script-based adaptive authentication function as below:

var authenticationHandler = require('authenticationHandler.js'); // script file name
var onLoginRequest = function(context) {
    // Method to execute for the discovery process; the handler expects the
    // client IP (see onLoginRequestProcess above).
    authenticationHandler.onLoginRequestProcess(context.request.ip);
};

Configure the other steps with the relevant Local Authenticators and Federated Authenticators, and reference each step from the script by its stepId, e.g. 'Step 1' as 1 (refer to Image 2).

For example, with only the stepId:

executeStep(1); // This will execute the first step configured under the Authentication Step Configuration

With the stepId and eventCallbacks:

executeStep(1, {
    onSuccess: function(context) {
        // Implement the flow after successfully completing step 1
    }
});

With the stepId, options, and an empty eventCallbacks object:

executeStep(1, {
    authenticationOptions: [{
        authenticator: 'authenticatorName' // execute the given authenticator as the first step
    }]
}, {});

Anusha Ruwanpathirana
Associate Software Architect

Using AWS Fargate to build low-cost ‘serverless microservices’ infrastructure


Serverless computing – also sometimes referred to as utility computing – is a topic of growing interest in the field of cloud-based solutions.

Key understandings

  • Microservices architecture style – This is a software development approach that builds a single application as a suite of small services, each running in its own process and communicating with the others via lightweight mechanisms. Each service is built around business capabilities and is independently deployable by automated deployment tools. Centralised management of such services can be kept to a bare minimum, and services may be written in different programming languages and may use different data storage technologies.
  • Serverless and scalable – Serverless computing exploits the advantages offered by cloud providers such as AWS, Google or Microsoft to run solutions offsite, upon cloud computing infrastructure. Because serverless architectures are – by nature of the technology – executed upon cloud infrastructure, serverless solutions also provide the benefit of instant scalability to solution owners. This means serverless microservices are able to scale process volumes up or down immediately in relation to demand.
  • AWS Fargate – Fargate is a web service offered by Amazon Web Services. Fargate allows solution owners and developers to run microservices styled as 'containerised solutions' without the added burden of having to manage solution infrastructure. In other words, Fargate allows solution developers to focus on designing and building applications, and not have to interact with or even think about servers or clusters.

The need for ‘Serverless’

To run containers, virtual machines are necessary – they act as the underlying layer of infrastructure. Having to manage a hosting technology or a virtual machine brings with it the burden of maintenance costs and of resolving issues that can arise in the long term, such as version upgrades, patch application, regular data backups and log clean-ups, to mention a few.

Instead, suppose we were to use a hollow placeholder – such as a runtime environment – that is capable of running a specific container function when that function is injected into the environment, and whose lifetime lasts only for the duration of that function's execution. We would then not have to provision virtual machines or bother about managing them. Customising a runtime environment to our needs requires a config specification attached to the placeholder, and we are only charged for that specification and the runtime of the placeholder. This eliminates the hassle – and the recurring charges – of running virtual machines continuously.

AWS Fargate – Task Definition (to specify configurations)

AWS Fargate provides for specifying the technical configurations of placeholders. This feature is known as ‘task definition’ and is used to allocate the necessary CPUs and memory to run containers. CPUs in this case are – in reality – Virtual CPUs (vCPU).

The image below displays a task definition screen that specifies 0.5 GB of memory and 0.25 vCPU.

(Image 1 – AWS Fargate 'Task Definition' to specify vCPUs and Memory)

AWS Fargate – Container definitions (to specify functionality)

‘Container definitions’ are used to specify the task or function that a container is required to accomplish in AWS Fargate (see image 2 below). The previously mentioned ‘Task definition’ will create the environment in which a defined container will execute. The container definition also specifies the docker image that starts up as the function.

Currently AWS Fargate supports Docker containers only (even though Docker is not the only container implementation technology out there). For the Docker image that runs the container, we can specify a path in the Docker Hub repository or use an image hosted on AWS ECR (Elastic Container Registry).

(Image 2 – ‘Container Definition’ is used to specify Docker image)

You may need to manually upload your preferred custom Docker image to ECR prior to use – for example, an Apache Tomcat based Docker container serving a web application that you have run before.
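
To make the two definitions concrete, the sketch below registers a Fargate task definition with the AWS SDK for JavaScript, combining the resource settings of the task definition with a container definition. The family name, image and port are hypothetical placeholders, not values taken from this guide.

// Sketch: registering a Fargate task definition (AWS SDK for JavaScript v2).
var AWS = require('aws-sdk');
var ecs = new AWS.ECS({ region: 'us-east-1' });

ecs.registerTaskDefinition({
    family: 'my-web-app',                  // hypothetical task definition name
    requiresCompatibilities: ['FARGATE'],  // run on Fargate rather than EC2-backed ECS
    networkMode: 'awsvpc',                 // required for Fargate tasks
    cpu: '256',                            // 0.25 vCPU, as in Image 1
    memory: '512',                         // 0.5 GB of memory, as in Image 1
    containerDefinitions: [{               // the 'container definition' layer
        name: 'web',
        image: 'httpd:2.4',                // a Docker Hub path; an ECR image URI also works
        portMappings: [{ containerPort: 80, protocol: 'tcp' }]
    }]
}, function(err, data) {
    if (err) console.error(err);
    else console.log('Registered:', data.taskDefinition.taskDefinitionArn);
});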

AWS Fargate – Pricing

Pricing is based on the requested amount of vCPUs and memory resources for the defined task, and is billed per second. Price per vCPU is $0.00001406 per second ($0.0506 per hour) and per GB memory is $0.00000353 per second ($0.0127 per hour).

For example, if you run a single vCPU and 1 GB sized container for 1 hour, your cost would only be $0.0506 + $0.0127 = $0.0633.

*Pricing is as at April 2018 and is subject to change at the discretion of the respective service providers.
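
To make the arithmetic explicit, here is a tiny sketch that reproduces the example above from the quoted per-second rates:

// Fargate cost estimate from the per-second rates quoted above (April 2018).
var PRICE_PER_VCPU_SECOND = 0.00001406; // USD per vCPU-second
var PRICE_PER_GB_SECOND = 0.00000353;   // USD per GB-second

function fargateCost(vcpus, memoryGb, seconds) {
    return vcpus * seconds * PRICE_PER_VCPU_SECOND
         + memoryGb * seconds * PRICE_PER_GB_SECOND;
}

// One vCPU and 1 GB for one hour, as in the example above:
console.log(fargateCost(1, 1, 3600).toFixed(4)); // ~0.0633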

Below is a structural illustration of the different configuration definitions, or layers, that are used to build an AWS Fargate service.

(Image 3 – Layers within a Fargate-based service implementation)

A service definition instructs how many simultaneous Docker instances need to run and how to load-balance incoming traffic – for example, a web service that needs multiple instances to handle the ingress load.

The below example screenshot shows only one task being assigned to the service.

(Image 4 – A Fargate service definition that specifies how many tasks to run)

A cluster is nothing but a logical grouping of a related set of services, and hence a set of tasks. Services can be grouped according to your requirements.
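
As an illustrative sketch of the service and cluster layers (the cluster, service, task definition and subnet names are hypothetical), a service that keeps one task running could be created like this with the AWS SDK for JavaScript:

// Sketch: creating a Fargate service with one running task (AWS SDK v2).
var AWS = require('aws-sdk');
var ecs = new AWS.ECS({ region: 'us-east-1' });

ecs.createService({
    cluster: 'my-cluster',          // the logical grouping of related services
    serviceName: 'web-service',
    taskDefinition: 'my-web-app:1', // family:revision registered earlier
    desiredCount: 1,                // number of simultaneous task instances
    launchType: 'FARGATE',
    networkConfiguration: {
        awsvpcConfiguration: {
            subnets: ['subnet-0abc1234'],
            assignPublicIp: 'ENABLED'
        }
    }
}, function(err, data) {
    if (err) console.error(err);
    else console.log('Service created:', data.service.serviceName);
});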

Benefits of serverless container services

  • Eliminates the need for EC2 management – Serverless microservice containers relieve system administrators of the headaches of managing an EC2 host. Infrastructure and system administrators won’t have to spend time and effort dealing with management activities such as restarting or stopping servers, upgrading resources manually, applying security patches and conducting health checks on the operating system.
  • Focus on application – Serverless microservices containers allow solutions developers to focus on application building. Infrastructure management and heavy lifting is handled automatically by AWS Fargate.
  • Flexible and affordable pricing model – Serverless microservices containers accommodate a simple and low-cost pricing model. Solutions developers need to worry about only three factors: CPU size, memory size and the number of seconds of run-time.
  • Scalable combinations of utilities – Serverless microservices containers can be adjusted to provide just the right amount of vCPU and memory for your application. Powerful scalability features allow scaling from the smallest allocation of computing resources up to high-powered capabilities.
  • Seamless auto scaling – Unlike manually adding ECS instances to handle spike loads, serverless microservices containers are designed to seamlessly auto-scale to match demand.
  • Ease of use – Central management capabilities via AWS web console or CLI deliver a much simpler user experience for solutions developers.

AWS Elastic Kubernetes Services

  • Amazon Elastic Container Service for Kubernetes, or Elastic Kubernetes Service (EKS), is a fully certified Kubernetes implementation in the AWS cloud – and is also a fully managed service. EKS is currently (as at April 2018) in preview mode (you have to register separately to get access to evaluate EKS, which is subject to AWS approval based on your use cases). EKS is expected to be available with support for Kubernetes v1.10. This service will be released as a Fargate version in which you will not need to manage Kubernetes nodes; it will be similar to the ECS version of Fargate, with different vCPU and memory configurations to choose from depending on the size of the workload for each Kubernetes pod. Benefits of implementing a serverless Kubernetes service:
    • Replaces the hosts (VMs) used as Kubernetes nodes, so management of hosts is not required
    • The complexities of managing Kubernetes clusters are handled by the AWS cloud for you
    • Highly available, provisioned master node clusters handle the background work, meaning there is no need to manage Kubernetes masters
    • Serverless Kubernetes services are built upon vanilla Kubernetes, not an AWS-specific version of Kubernetes; therefore applications working on standard Kubernetes are compatible with EKS
    • Significant cost reduction due to minimal infrastructure management requirements.

Summary

Serverless technologies are increasingly gaining the attention of solutions developers because of the immense convenience they offer in managing applications, along with cost savings. Maintenance costs, too, are minimal, because developers do not have to deal with virtual hosts and serverless charging models adhere to resource usage parameters.

AWS offers two docker container based clustered environment capabilities known as ECS and EKS. ECS is Amazon’s proprietary implementation of cluster management and orchestration framework. EKS on the other hand is a Kubernetes based implementation.

Both ECS and EKS offer serverless versions (EKS is said to be made available sometime during the year 2018) where the underlying infrastructure is fully managed by AWS. All that the solution developer is required to do is prepare a relevant docker image that contains specific functionalities to deploy on AWS Fargate.

The best part is that both of these services seamlessly integrate with other AWS tools to allow you to make use of other robust AWS technologies such as IAM, S3, RDS, Cloudwatch and numerous other tools.

AWS Fargate is considered a forerunner technology and is still evolving as a product in the AWS cloud suite. At the time of writing, AWS Fargate is only available for the ECS platform and is limited geographically to the Northern Virginia region, but it will soon be made available in other regions as well. We think it is beneficial for solution developers to be aware of easy-to-work-with technologies such as AWS Fargate, and we expect to see a rise in solution developers migrating future workloads to serverless platforms – of which AWS Fargate is, we think, currently the most prominent and powerful.

Thank you for reading this Tech Guide. We hope you will also read our next tutorial so that we can help you solve some more interesting problems. Also, don’t forget to subscribe to our newsletter to stay updated with the latest in technology.

Anuradha Prasanna

Associate Architect | Mitra Innovation

Eight types of AWS storage services explained


Amazon Web Services (AWS) is one of the most competent cloud service providers in the world right now. In this tech guide we explore eight of the storage services made available by AWS, looking at each service in detail along with its best practices.

Overview

Over the years, data storage has diversified vastly to cater to varying needs. Ranging from the needs of a single person to those of a multinational company, data storage has become a must-have for everyone. Starting from punch cards, which were used to communicate information to equipment even before computers, and evolving to cloud storage, the most popular storage option currently available, data storage technologies have transformed and are still evolving day by day.

Among the hundreds of cloud service providers, Amazon Web Services (AWS) dominates the digital market and is a flexible, cost-effective, easy-to-use cloud computing platform.

This article will help you understand the different storage services and features available in the AWS Cloud.

First, to provide context to this post, let’s take a couple of minutes to understand the following important terms:

  1. AWS Region – an AWS Region is a demarcated geographic area. Each region consists of multiple, isolated locations known as Availability Zones. Currently AWS offers 16 regions, such as US East (N. Virginia), Canada (Central), EU (Ireland), EU (Paris), Asia Pacific (Singapore) and so on.
  2. AWS Availability Zone – an AWS Availability Zone is an isolated location within a region. Each region is made up of several Availability Zones, and each Availability Zone belongs to a single region (see the sketch after this list).
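
As a small sketch of how regions and Availability Zones surface in practice, the snippet below lists them with the AWS SDK for JavaScript (v2); the region chosen is illustrative.

// Sketch: listing AWS regions and the Availability Zones of one region.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.describeRegions({}, function(err, data) {
    if (err) return console.error(err);
    data.Regions.forEach(function(r) { console.log(r.RegionName); });
});

ec2.describeAvailabilityZones({}, function(err, data) {
    if (err) return console.error(err);
    // Each Availability Zone belongs to exactly one region (here, us-east-1).
    data.AvailabilityZones.forEach(function(z) { console.log(z.ZoneName); });
});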

AWS Storage Services

Amazon Web Services (AWS) provides low-cost data storage with high durability and high availability. AWS offers storage choices for backing up information, archiving, and disaster recovery.

We have compiled a list of the main storage services available on the AWS Cloud, as follows:

  1. Amazon Simple Storage Service (Amazon S3)
  2. Amazon Glacier
  3. Amazon Elastic File System (Amazon EFS)
  4. Amazon Elastic Block Store (Amazon EBS)
  5. Amazon EC2 Instance Storage
  6. AWS Storage Gateway
  7. AWS Snowball
  8. Amazon CloudFront

This article explores all of the above-mentioned storage services, but excludes Amazon database services such as Amazon RDS, Amazon DynamoDB and so on.