
Fishing in data lakes to derive valuable business insights

According to EMC’s research, the amount of data in our digital universe is expected to grow from 4.4 trillion GB in 2013 to 44 trillion GB by 2020. This ever-increasing pool of invaluable data gives organisations more opportunities to understand customer experience, derive actionable insights and generate higher value from the vast quantities of data they accumulate.

To cater to ever-increasing data storage needs, enterprise data storage has undergone a technological shift – from data warehouses to data lakes – which proves attractive to organisations because of the increased computing power and storage capacity on offer.

With the explosion of the data lake concept over the past five years, it is important to observe carefully how enterprises store data from disparate sources, and to ensure those storage facilities do not end up as data dumping grounds that lead to the siloing of data.

An interesting feature of data lakes is that they act as a reservoir of valuable data for enterprises, enabling rapid ingestion of data in its native format even before data structures and business requirements have been defined for its use.

The value generated for enterprises lies in having access to this vast amount of data from disparate sources, the ability to discover insights, the visualisation of large volumes of data and, importantly, the ability to perform analytics on top of that data. Previously, acquiring data at this scale and performing these functions was far more complicated.

It should be noted that analytics cannot be derived from raw data alone. Data first needs to be integrated, cleansed, transformed, its metadata managed and properly governed. Only then can data lakes be harnessed to bring data from disparate sources and in diverse formats under control and correlate it – resulting in business insights that increase value to market.

Praedictio Data Lake, engineered by Mitra Innovation, provides a competitive advantage with its comprehensive data lake solution consisting of data cataloguing, visualisation of the data in the lake, ETL (extract, transform and load) and data governance. The solution is built on AWS technologies: Amazon S3 for data storage, AWS Glue for data cataloguing and ETL, and Amazon QuickSight for insight generation.
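To illustrate how a catalogue can be built over data landing in S3, the sketch below uses boto3 to create and start an AWS Glue crawler. It is a minimal example rather than Praedictio’s actual implementation; the bucket, database, crawler and IAM role names are hypothetical placeholders.

```python
import boto3

glue = boto3.client("glue", region_name="eu-west-1")

# Hypothetical names - replace with your own bucket, role and database
CRAWLER_NAME = "sales-raw-crawler"
S3_TARGET = "s3://example-data-lake-raw/sales/"
GLUE_ROLE_ARN = "arn:aws:iam::123456789012:role/GlueCrawlerRole"
CATALOG_DATABASE = "data_lake_raw"

# Create a crawler that scans the raw zone and infers table schemas
glue.create_crawler(
    Name=CRAWLER_NAME,
    Role=GLUE_ROLE_ARN,
    DatabaseName=CATALOG_DATABASE,
    Targets={"S3Targets": [{"Path": S3_TARGET}]},
    # Re-run on a schedule so new partitions and schema changes are catalogued
    Schedule="cron(0 2 * * ? *)",
)

# Trigger an initial run immediately
glue.start_crawler(Name=CRAWLER_NAME)
```

Once the crawler has populated the Glue Data Catalog, the same tables become queryable from other AWS analytics services and usable as QuickSight data sources.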

Praedictio leverages the inherent advantages of AWS analytics capabilities: there are no servers to manage (AWS takes care of the heavy lifting of server deployment and migration), a pay-as-you-use model with no upfront or annual fees, and scalability from a few users to tens of thousands of users. The most attractive feature is SPICE (Super-fast, Parallel, In-memory Calculation Engine), which lets users run interactive queries over large data sets and receive rapid responses.
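As a small illustration of working with SPICE programmatically, the sketch below triggers a refresh (ingestion) of a QuickSight data set so that dashboards query the in-memory copy rather than the source. The account ID and data set ID are hypothetical.

```python
import time
import uuid
import boto3

quicksight = boto3.client("quicksight", region_name="eu-west-1")

ACCOUNT_ID = "123456789012"           # hypothetical AWS account
DATASET_ID = "sales-curated-dataset"  # hypothetical SPICE data set

# Kick off a SPICE ingestion (refresh) for the data set
ingestion_id = str(uuid.uuid4())
quicksight.create_ingestion(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    IngestionId=ingestion_id,
)

# Poll until the refresh completes
while True:
    status = quicksight.describe_ingestion(
        AwsAccountId=ACCOUNT_ID,
        DataSetId=DATASET_ID,
        IngestionId=ingestion_id,
    )["Ingestion"]["IngestionStatus"]
    if status in ("COMPLETED", "FAILED", "CANCELLED"):
        print("Ingestion finished with status:", status)
        break
    time.sleep(10)
```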

In addition to supporting analytics on historical data, Praedictio also delivers predictive analytics using machine learning technologies.

Machine learning shortens the time it takes to reach insights. By leveraging AWS machine learning capabilities, Praedictio lowers the barriers developers face when using machine learning and provides easy access to trained models.
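As an example of what easy access to trained models can look like in practice, the sketch below calls a model that has already been deployed behind an Amazon SageMaker endpoint. This is an assumption about the hosting approach rather than a description of Praedictio’s internals; the endpoint name and feature values are hypothetical.

```python
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="eu-west-1")

ENDPOINT_NAME = "demand-forecast-endpoint"  # hypothetical deployed model

# A single CSV row of input features for the model
payload = "2024-06-01,store-42,1890.50,holiday"

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="text/csv",
    Body=payload.encode("utf-8"),
)

# The response body format depends on how the model was packaged
prediction = response["Body"].read().decode("utf-8")
print("Predicted value:", prediction)
```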

Nevertheless, it is worth keeping in mind the importance of data management within a data lake. If data is not properly catalogued and governed, the opportunity to derive business insights is far smaller.

Follow us as we explore the newest frontiers in ICT innovation and apply such technologies to solving real-world problems faced by enterprises, organisations and individuals!

Data Lake Governance

A data lake is a central storage facility that houses an organisation’s structured, unstructured and semi-structured data. In most cases, the data that is ingested will be strewn all over the lake. As a data lake accumulates data over the years, this can lead to ‘data swamping’, where users no longer know where their data is stored or what transformations were applied to the data that was ingested. Such a situation leaves data lying in isolation, defeating the whole point of storing it.

This is where data lake governance comes into play. Data governance is a pre-defined data management process that an organisation implements to ensure that high-quality data is available throughout the whole data life cycle. However, there is a void in semantic consistency and governance of metadata in current data lake implementations (Gartner, 2017).

There are a number of benefits to implementing data governance within a data lake:

  • Traceability – helps understand the entire life cycle of the data residing in the data lake (this also includes metadata and lineage visibility)
  • Ownership – helps organisations identify data owners should there be questions about the validity of data
  • Visibility – helps data scientists swiftly and easily recognise and access the data they are looking for, amidst large volumes of structured, semi-structured and unstructured data
  • Monitored health – helps ensure that data in the data lake adheres to pre-defined governance standards
  • Intuitive data search – helps users find and ‘shop’ for data in one central location, using familiar business terms and filters that narrow results to isolate the right data (see the short search sketch below).
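As a hedged illustration of intuitive data search over a governed catalogue, the sketch below searches the AWS Glue Data Catalog for tables matching a business term. It assumes the lake is catalogued in Glue, as described for Praedictio below; the search term and database filter are hypothetical.

```python
import boto3

glue = boto3.client("glue", region_name="eu-west-1")

# Search the Glue Data Catalog for tables whose name, description or
# properties mention a familiar business term
response = glue.search_tables(
    SearchText="customer orders",  # hypothetical business term
    Filters=[
        # Hypothetical filter narrowing results to the curated zone
        {"Key": "DatabaseName", "Value": "data_lake_curated"}
    ],
    MaxResults=25,
)

for table in response["TableList"]:
    print(table["DatabaseName"], "/", table["Name"])
```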

Praedictio Data Lake

Praedictio, an Amazon Web Services powered data lake solution developed by Mitra Innovation, offers all of the business benefits discussed above. One of the key attractions of the Praedictio Data Lake lies in its visualisation component, which features a powerful three-fold visualisation of the data lake:

– Data lineage visualisation

– Source and destination visualisation

– Graph visualisation of data in the data lake

Furthermore, Praedictio Data Lake is equipped with a dashboard component which delivers visualisation of the health of the data lake to users, along with an alerting mechanism when pre-defined thresholds are met.

Another key feature of a data lake is the ability to catalogue data, based on the metadata that relates to the data residing in the lake. This helps users easily search for the data they need and determine which data is fit to use – and which data should be discarded because it is incomplete or irrelevant to the analysis at hand. The catalogue also shows how the schema of the underlying data has changed over time.
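As a small sketch of tracking schema changes through the catalogue, the snippet below lists the recorded versions of a Glue Data Catalog table and prints each version’s column set. It assumes the catalogue is held in AWS Glue; the database and table names are hypothetical.

```python
import boto3

glue = boto3.client("glue", region_name="eu-west-1")

DATABASE = "data_lake_curated"   # hypothetical catalogue database
TABLE = "customer_orders"        # hypothetical table

# Glue keeps a version history for each catalogued table
versions = glue.get_table_versions(DatabaseName=DATABASE, TableName=TABLE)

for version in versions["TableVersions"]:
    table = version["Table"]
    columns = [c["Name"] for c in table["StorageDescriptor"]["Columns"]]
    print(f"Version {version['VersionId']}: {columns}")
```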

Key takeaway

Data lakes store data in its native format. Data structures and requirements are not defined until the data is needed. In its raw, native form the data cannot be used to derive business insights or gain a competitive edge. This makes it important that an organisation adds policy-driven processes that give context to the underlying data, so that stakeholders can use it more efficiently and effectively.

Hence, it is evident that data governance policies and data cataloguing are of great importance for generating higher value, deriving actionable insights and making informed decisions, as well as for eliminating the drawbacks of data silos in data lakes.

Follow us as we explore the newest frontiers in ICT innovation and apply such technologies to solving real-world problems faced by enterprises, organisations and individuals.

7 best practices to follow when designing scalable IoT architecture using AWS IoT

Internet of Things (IoT) systems handle billions of source devices and trillions of data points flowing into a central platform. The growth of data is massive, and IoT systems must scale in order to manage this explosion of data.

Key functions such as inbound data collection within a central platform, real-time data analytics, scalable storage and offline analytics are all required to scale seamlessly for an IoT solution to scale successfully. There are IoT platforms that help manage requirements such as high scalability and security, relieving business owners of the task of managing such challenges themselves.

The architecture of an IoT solution largely depends on the requirements of the given system, the load and data involved, and the IoT platform in use. Today we discuss a few of the best practices we follow to achieve the desired scalability of a solution using AWS IoT.

(Video: What is AWS IoT – take a first look at AWS IoT)

1. Design to operate at scale reliably from day one

As we saw, an IoT system is expected to deal with high-velocity and high-volume data captured by device sensors. The flood of incoming data might arise because of sudden growth in business, consistent growth over time, or due to a malicious attack. In any case, the system should be geared to face this from day one.

While taking care of the scalability aspects, the IoT system should also make sure that the data received is reliably processed. The best approach is to queue or buffer data as soon as it enters the system to ensure reliability.
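As a minimal sketch of this buffering approach, the snippet below pushes each incoming reading onto an Amazon SQS queue as soon as it is received, so downstream consumers can process it reliably at their own pace. The queue URL and message shape are hypothetical.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")

# Hypothetical queue acting as the ingestion buffer
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/telemetry-ingest"


def buffer_reading(reading: dict) -> None:
    """Queue a raw device reading for reliable, asynchronous processing."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(reading),
    )


buffer_reading({"device_id": "sensor-001", "temperature": 21.4, "ts": 1718000000})
```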

2. Use AWS IoT Rules to route large data volumes into the rest of AWS

Consuming data from device topics directly with a single instance (without fault tolerance) prevents a system from reaching its full scalability potential. Furthermore, this approach limits the availability of the system. The AWS IoT Rules Engine, on the other hand, is designed to connect to endpoints external to AWS IoT Core in a scalable way.

Additionally, the AWS IoT Rules Engine can trigger multiple different actions in parallel once data is captured by the IoT system, giving the system the ability to fork data into multiple datastores (receiving systems) simultaneously. If a receiving system is not designed to act as a single point of entry, the data should be buffered or queued into a reliable intermediary before being sent to the target systems, so that the system can recover in the event of a downstream failure.
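A minimal sketch of such a rule is shown below: it matches telemetry published on a device topic and forks each message to a Kinesis stream (for real-time processing) and an S3 bucket (for offline analytics). The topic, stream, bucket and role names are hypothetical.

```python
import boto3

iot = boto3.client("iot", region_name="eu-west-1")

ROLE_ARN = "arn:aws:iam::123456789012:role/IoTRuleActionRole"  # hypothetical

iot.create_topic_rule(
    ruleName="fork_sensor_telemetry",
    topicRulePayload={
        # Select every message published under sensors/<deviceId>/telemetry
        "sql": "SELECT * FROM 'sensors/+/telemetry'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {
                "kinesis": {
                    "roleArn": ROLE_ARN,
                    "streamName": "telemetry-stream",      # hypothetical
                    "partitionKey": "${topic(2)}",
                }
            },
            {
                "s3": {
                    "roleArn": ROLE_ARN,
                    "bucketName": "example-telemetry-archive",  # hypothetical
                    "key": "${topic(2)}/${timestamp()}.json",
                }
            },
        ],
    },
)
```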

3. Invest in automated device provisioning at the earliest

When a successful business grows, the number of devices connecting to the IoT system is likely to increase as well. This makes manual processes – device provisioning, bootstrapping software, security configuration, device registration and upgrades – no longer feasible. Hence, minimising human interaction in the initialisation process is important to save time and increase efficiency.

Designing built-in capabilities within the device for automated provisioning, and leveraging the tools that AWS provides to handle device provisioning and management, allows systems to achieve the desired operational efficiencies easily and with minimal human intervention.
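One way to automate provisioning on AWS is fleet provisioning with a template; the hedged sketch below registers such a template so that devices presenting a claim certificate are created as things with a policy attached. The template body is simplified, and the names and role ARN are hypothetical.

```python
import json
import boto3

iot = boto3.client("iot", region_name="eu-west-1")

# A simplified fleet-provisioning template: registers the device as a thing
# and attaches an IoT policy to the certificate it presents
template_body = {
    "Parameters": {"SerialNumber": {"Type": "String"}},
    "Resources": {
        "thing": {
            "Type": "AWS::IoT::Thing",
            "Properties": {"ThingName": {"Ref": "SerialNumber"}},
        },
        "certificate": {
            "Type": "AWS::IoT::Certificate",
            "Properties": {"CertificateId": {"Ref": "AWS::IoT::Certificate::Id"}},
        },
        "policy": {
            "Type": "AWS::IoT::Policy",
            "Properties": {"PolicyName": "DeviceTelemetryPolicy"},  # hypothetical
        },
    },
}

iot.create_provisioning_template(
    templateName="fleet-provisioning-template",  # hypothetical
    provisioningRoleArn="arn:aws:iam::123456789012:role/IoTProvisioningRole",
    templateBody=json.dumps(template_body),
    enabled=True,
)
```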

4. Manage IoT data pipelines

The tremendous amount of data captured by an IoT solution needs to go through data processing channels to turn it into information. This processing phase might involve many systems and components. The paths that data travels through are referred to as data pipelines, and they have to be designed to handle huge loads without compromising performance.

In addition, architects should keep in mind that not all data requires all of the processing power the system provides. During the design phase, architects should determine the appropriate data pipeline for each type of data.
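As an illustrative sketch of separating pipelines by data type, the snippet below provisions a Kinesis data stream for telemetry that needs real-time analytics and a Kinesis Data Firehose delivery stream that batches lower-priority data straight into S3. The stream, bucket and role names are hypothetical.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")
firehose = boto3.client("firehose", region_name="eu-west-1")

# Real-time pipeline: a Kinesis stream for data that must be analysed immediately
kinesis.create_stream(StreamName="realtime-telemetry", ShardCount=2)

# Batch pipeline: a Firehose delivery stream that lands low-priority data in S3
firehose.create_delivery_stream(
    DeliveryStreamName="batch-telemetry-to-s3",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",  # hypothetical
        "BucketARN": "arn:aws:s3:::example-telemetry-archive",             # hypothetical
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
    },
)
```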

5. Adopt scalable architecture for custom components

Adopt scalable architecture for custom components that are added to the solution, and ensure that these components do not turn into performance bottlenecks that affect the entire solution. Additionally, these components must be designed to accommodate system expansion easily.

Adopting a microservice-like architecture not only embeds scalability within the system, but also provides the flexibility to replace components that could affect performance.

6. Adopt multiple data storage technologies

IoT systems deal with high-volume, high-velocity and widely varied data. A single type of datastore might not be able to store all of this data efficiently. Architects should choose the most appropriate datastores and supporting technologies for each type of data, achieving the desired capacity and scalability of the system by increasing efficiency and throughput.

7. Filter data before processing

Not all of the data directed towards an IoT system requires processing in real time, and such data can be filtered out at the edge. Filtering can be facilitated by AWS IoT Greengrass. Architects can identify data that the system does not need immediately and design the system to accept this data in chunks when the cloud platform asks for it. This way, system capabilities are used optimally, allowing more room for scaling.
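Below is a minimal sketch of an edge filter written as a Greengrass (V1 Core SDK) Lambda function: it forwards a reading towards the cloud only when it crosses a threshold, so routine readings stay local. The topic name and threshold are hypothetical.

```python
import json
import greengrasssdk  # Greengrass V1 Core SDK, available on the core device

iot_client = greengrasssdk.client("iot-data")

TEMPERATURE_THRESHOLD = 75.0             # hypothetical alert threshold
CLOUD_TOPIC = "sensors/filtered/alerts"  # hypothetical cloud-bound topic


def function_handler(event, context):
    """Invoked for each local message; forwards only significant readings."""
    reading = json.loads(event) if isinstance(event, str) else event

    if reading.get("temperature", 0.0) >= TEMPERATURE_THRESHOLD:
        # Only readings above the threshold are published towards the cloud
        iot_client.publish(topic=CLOUD_TOPIC, payload=json.dumps(reading))
    # Everything else is dropped at the edge (or could be batched locally)
    return
```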

Conclusion

In the coming years, IoT is expected to be instrumental in managing exponential growth. In turn, as the adoption of fully integrated IoT systems grows, the number of devices being added to these systems is also expected to grow exponentially.

Thus it is important to implement systems that can scale when required, whilst also ensuring reliability and security. The AWS IoT platform is a cloud IoT platform that offers all the qualities a modern architecture may demand.

Implementing a scalable and reliable system from day one saves cost and effort for architects and business owners.

Even though the AWS IoT platform is designed to provide automated scalability, solution architects should follow these best practices to ensure that system integrations are planned and implemented in a scalable manner, providing a comprehensive and scalable solution.

Thank you for reading our latest Mitra Innovation blog post. We hope you found these lessons interesting, and that you will continue to visit us for more articles in the field of computer science. To read more about our work, please feel free to visit our blog.

Sangeetha Navaratnam
Software Architect | Mitra Innovation


CI-CD Automation with Jenkins

Software teams face a dilemma when they move away from traditional waterfall methods to agile methods of software development. Under waterfall methods, teams build entire lists of system components, integrate them towards the end of the project and ensure that the components are tested well.

Even though waterfall methods involve a largely manual process, they can be handled with fewer complexities because of the lower rate of iteration.

With agile methods in play, teams follow a continuous build, integrate and test roadmap to continuously expand system functionality, instead of building components separately and assembling them together at the end.

Understanding Continuous Integration (CI) –

Multiple engineers work to develop new systems one component at a time. Every time a new component or feature is completed, the finished component will be added to the existing code. This is what is referred to as a ‘build’.

For instance, consider the following example:

A movie production team creates an initial video clip and completes the whole sequence by continuously adding new frames. Every time a new sequence of frames is added, the producers play the movie from the beginning to check whether the whole movie still makes sense. If it does, then we can safely say we have a ‘green build’.

Now, let’s say an engineer adds a new piece of code to the green build. However, when the test is run (just like re-running the whole movie sequence), it turns out that the new component doesn’t fit in very well. The component doesn’t integrate and thus results in a ‘red build’. Now, the engineer who created that particular faulty piece of code has to fix it.

Previously engineers wouldn’t have wanted everyone to know that a faulty piece of code had been added to the system build. Today, however, it is the opposite.

Thanks to Continuous Integration practices, the development team is informed as soon as a faulty piece of code is added to the build. Developers make use of ‘red’ and ‘green’ lights to maintain build status visibility across the team.

A red light indicates that no new piece of code is to be added until a green light is indicated.

 

Extending the above example, let’s assume the team consists of 10 developers, each adding or changing code around 50 times a day. This adds up to nearly 500 builds per day, per project, and includes the following activities for each day the project is in development:

  • 500 rounds of source code downloads
  • 500 rounds of source code compilations
  • 500 rounds of artefact deployment
  • 500 rounds of testing (i.e unit testing, regression testing)
  • 500 rounds of build pass/fail notifications

This is the point at which automation becomes essential.

 

Understanding Continuous Deployment/Continuous Delivery (CD) –

CD is the abbreviation for both Continuous Deployment and Continuous Delivery. It must be emphasised that the two expansions of CD are distinct and do not mean the same thing.

Allow me to elaborate with the following example:

Consider two types of television programme: the news and a cookery show. Unlike the cookery show, the news programme has to go through a considerable number of checks prior to broadcast. Similarly, certain software domains allow the freedom to release directly into production environments, whereas other software domains require a business decision (prior approval) before a release can proceed into the production environment.

 

Continuous deployment and continuous delivery both mean automating deployment and testing of a software on a regular basis to a production-like environment. However, the differences are as follows:

Continuous Deployment — A fully automated process that includes production deployments (no human interaction after code commit).

Continuous Delivery — An almost fully automated process where production deployments occur following a business decision.

 

When following CI-CD methodologies, a pipeline typically breaks the software delivery process into various stages. The last stage provides feedback to the development team. Even though there is no defined standard for creating a pipeline, we can identify the following stages:

  • Build automation
  • Continuous integration
  • Deployment automation
  • Test automation

A project may have different pipelines or even different types of pipelines for different purposes.

Let’s take a look at our options.

This leaves us with a requirement for a powerful tool which is capable of the following, using pipelines:

  • Automate code builds
  • Build artefacts
  • Perform deployments
  • Run tests
  • Provision the above into multiple environments (Dev, SIT, UAT/Prep/Prod)

This leads us to several different engines such as:

  • CircleCI
  • Eclipse Hudson
  • GitLab CI
  • JetBrains TeamCity
  • ThoughtWorks GoCD
  • ThoughtWorks Snap
  • Jenkins

Jenkins –

Jenkins is an open-source CI-CD tool written in Java. Since Java is platform independent, Jenkins inherits that property and runs on most platforms available today.

Jenkins started as a fork of the original Hudson project and has evolved ever since. Today it is among the top contenders in DevOps toolchains because it is free, open source and modular.

Even though the functionality of Jenkins straight out of the box is limited, there are more than 1,000 plugins that extend its capabilities far beyond most commercial or FOSS tools.

(Figure: the Jenkins interface)

Coming back to the original idea of choosing a CI-CD tool for our software projects, here are some ways in which Jenkins matches our requirements (a short automation sketch follows the list):

  • Automate code checkouts from the repository
    • Supports almost all existing source code repositories, and can be expected to support upcoming ones as well (i.e: Mercurial, Subversion, Git, CVS, Perforce, Bzr, Gerrit, Monotone, Darcs, etc.)
  • Automate the build
    • Supports most of the build automation tools available (i.e: Command-line, Maven, Ant, Gradle, etc.)
  • Every commit should build on an integration machine
    • Supports this by polling as well as by providing listeners so the SCM can trigger builds (i.e: Poll SCM feature, Git hooks, etc.)
  • Make the build self-testing
    • Supports most unit testing tools/platforms through a number of plugins (i.e: JUnit, NUnit, etc.)
  • Test in a clone of the production environment
    • Supports most test tools/platforms through a number of plugins (i.e: JMeter, SoapUI, Selenium, Cucumber, etc.)
  • Make it easy for anyone to get the latest executable version
    • Supports this by maintaining build execution history and through a number of plugins (i.e: Build history, Version Number Plugin, etc.)
  • Everyone can see what’s happening
    • Supports this through a simple, easy-to-understand UI and build statuses (i.e: green, red and grey build statuses, the Blue Ocean project, etc.)
  • Automate deployment
    • Supports automation out of the box, along with build pipelines and a variety of plugins (i.e: cron jobs, Build Pipeline plugin, etc.)
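As a small, hedged illustration of driving this automation programmatically, the sketch below uses the python-jenkins client to trigger a parameterised pipeline job and report its latest result. The server URL, credentials and job name are hypothetical, and the job is assumed to already exist on the Jenkins server.

```python
import time
import jenkins  # pip install python-jenkins

# Hypothetical Jenkins server and credentials (use an API token, not a password)
server = jenkins.Jenkins(
    "https://jenkins.example.com",
    username="ci-user",
    password="api-token",
)

JOB_NAME = "my-service-pipeline"  # hypothetical pipeline job

# Trigger a new build with parameters
server.build_job(JOB_NAME, parameters={"BRANCH": "main"})

# Give the queue a moment, then report the most recent build's result
time.sleep(30)
last_build_number = server.get_job_info(JOB_NAME)["lastBuild"]["number"]
build_info = server.get_build_info(JOB_NAME, last_build_number)
print(f"Build #{last_build_number} finished with result: {build_info['result']}")
```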

 

Conclusion –

It is hard to overstate the importance of orchestrating, automating and streamlining development processes. Deployments that are themselves tested over and over provide near-complete confidence in the system being built.

CI-CD is central to DevOps, and a successful DevOps implementation has implications that extend beyond IT to the business itself. Continuous improvement of software continuously improves products and services.

Celebrating six years of success with a party in Colombo

Mitra Innovation recently celebrated its sixth year of success, by throwing a party for our employees.

The party was attended by the three co-founders of Mitra Innovation – Ashok Suppiah (CEO), Derek Bell (COO) and Dammika Ganegama (MD) – as well as most of the company’s directors and its 180-strong employee force, along with their husbands, wives or partners. International team members flew out to Sri Lanka to join the celebration as well.

The party was held at the Jaic Hilton in Colombo, and the theme was ‘Tropical Fiesta’. Mitra Innovation enjoyed an amazing night of dinner, dancing and a calypso band.

Mitra Innovation started from humble beginnings in 2012 with a skeleton staff, one office and two clients. Because the company has now grown into a global organisation with an employee headcount of 180, key clients including Capital Alliance, Ramsay Healthcare and Travis Perkins, and five offices – three in Sri Lanka, an HQ in London and an additional office in the UK – the party was thrown as a celebration of that success.

Ashok Suppiah says: “Mitra Innovation has come a long way since we began trading in 2012, and a huge part of this success is due to our passionate, dedicated and driven employees, both in the UK and Sri Lanka. We wanted to throw a party to say thank you to all our employees for their hard work and effort. It was a great event!”

Julie Pease

Director – Marketing | Mitra Innovation

What is WSO2 Middleware?

What is WSO2?

WSO2 is a global, open-source middleware supplier. It has crafted an enterprise platform and methodology that enables systems, such as CRM and accounting systems, to communicate and share data. Mitra Innovation uses middleware extensively, and this article explains what it is, the powerful impact it can have, and why we recommend WSO2 middleware to our customers again and again.

What Is Middleware?

As the name suggests, middleware sits at the heart of your platform or enterprise and enables different types of software systems to ‘talk’ to each other when they might not necessarily speak the same language. Middleware gives us the ability to take discrete, modularised applications and use them again and again with an array of different software solutions on different platforms, making development more efficient and cost effective. In a world where we are more connected than ever before, and with new applications and software being developed all the time, middleware provides the vital connectivity between systems and services, enabling the seamless transfer of information from system to system when needed, regardless of platform, devices in use, location, or time.

WSO2 Middleware

WSO2 is the only open-source, cloud-ready middleware platform and methodology available on the market, meaning it is free to use and can be customised without restriction. The open-source model fosters rapid development and innovation in scalable environments. In a word, it gives you control: control over development, control over budgets and control over infrastructure.

WSO2 is seamless and secure, encouraging rapid and innovative development customised to the needs of the business. Being cloud-based means developers can work collaboratively and rapidly in on-demand environments, learning from each other as they go. This is important in competitive industries where the ability to get software products to market swiftly can mean the difference between success and failure.

WSO2’s range of modules includes:

  • The Enterprise Service Bus (ESB) is a lightweight, high performance, comprehensive connectivity tool. WSO2 ESB effectively addresses integration standards and supports all integration patterns, enabling connectivity between systems and business applications
  • The WSO2 Internet of Things (IoT) server is a complete solution that enables device manufacturers and enterprises to connect and manage their devices, build apps, manage events, secure devices and data, and visualise sensor data in a scalable manner
  • The API Manager combines easy, managed API access with full API governance and analysis
  • The Comprehensive Identity Server is a central control point that connects and manages thousands of users with multiple identities across applications, APIs, the cloud, mobile, and Internet of Things devices, regardless of the standards on which they are based

 

WSO2’s Proven Track Record:

WSO2’s client list speaks for itself – eBay, O2 and Dialog Axiata are just a few.

Mitra have used it on multiple successful projects, namely:

Kraydel: A young tech company who took on the challenge of building an innovative new assisted living service that enables elderly people to live independently in their own homes. Using a smartphone app, a wrist strap, and a base station, Kraydel provides real-time updates for relatives checking on elderly loved ones. By using WSO2, Kraydel’s journey to market was accelerated, and costs were kept to a minimum. Mitra helped ensure the platform was ‘enterprise ready’, being highly scalable, and able to integrate to any external providers.

CapGemini: Mitra provided WSO2 expertise and resources to CapGemini for two of their key UK based WSO2 clients. 

Capital Alliance: A leading, frontier market investment bank, offering a comprehensive selection of integrated investment and capital market solutions to a diverse group of clients including financial institutions, family-run corporations, and high-net-worth individuals.  Mitra helped the bank find the best CRM solution and used the WSO2 Enterprise Service Bus (ESB) to allow a single point of creation and update for each customer record, ensuring a single point of truth.  Services such as logging, security and audit functionalities are native WSO2 services and were developed as platform capabilities.  

WSO2 Telco: Mitra partnered with WSO2 to service six large Indian mobile providers.

Mitra Innovation became a Premier Certified Integration Partner of WSO2 by helping our customers choose the right WSO2 products for their business, then designing, developing and deploying elegant, seamless, cost effective and fully supported solutions. WSO2 is a powerful ally in a hyper-connected world and Mitra can harness its power for your business. 

Get in touch

If you’re interested in finding out more about how WSO2 middleware can transform your business, please get in touch with us at innovate@mitrai.com

Fraser Bell
Digital Marketing | Mitra Innovation