2023 Blog, Blog, Featured, Feature Blog

As universities grow in the post-COVID era, they need to leverage digital transformation across their computing assets, a workforce distributed over multiple campuses, a global student body, and innovative learning and research programs. This calls for a technology-led program that makes education frictionless by leveraging cloud-based solutions in a pre-packaged model covering university IT, learning needs, and research computing. Working closely with the AWS partnership to make digital learning frictionless, Relevance Lab is bringing to market a unique new concept, University in a Box, which extends a self-contained Cloud Portal with the basic applications needed to power a university. This new, radical, and innovative concept is based on the idea of a school, college, or university going from zero (no AWS account) to cloud native in hours, enabling the Cloud "Mission with Speed" for a mature, secure, and comprehensive adoption very fast.

A typical university starting on its cloud journey needs a self-service interactive interface with user logins; tracking and offering of the deployed products; actions for connectivity after assets are deployed; lifecycle interactions in the Cloud Portal UI with no need to go to the AWS Console; and a comprehensive view of cost and budget tracking.

The key building blocks for University In A Box comprise the following:

  • University Catalog – CloudFormation templates useful to Higher Education, packaged as Service Catalog products
  • Self-Service Cloud Portal for University IT users to order items with security, governance and budget tracking
  • Easy onboarding model to get started with a hosted option or self-managed instances of Cloud Portal
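As an illustration of the first building block, a CloudFormation template can be registered as a Service Catalog product with a single API call. The sketch below only builds the request payload that the Service Catalog `CreateProduct` API expects; the product name, owner, and template URL are hypothetical placeholders, and in practice the payload would be passed to `boto3`'s `servicecatalog` client or the equivalent AWS CLI command.

```python
# Sketch: payload for Service Catalog's CreateProduct API, which turns a
# CloudFormation template into an orderable catalog product. The names and
# the template URL below are illustrative placeholders, not real resources.

def catalog_product_request(name, owner, template_url, description=""):
    """Build the request body for servicecatalog.create_product()."""
    return {
        "Name": name,
        "Owner": owner,
        "Description": description,
        "ProductType": "CLOUD_FORMATION_TEMPLATE",
        "ProvisioningArtifactParameters": {
            "Name": "v1",
            "Type": "CLOUD_FORMATION_TEMPLATE",
            # Service Catalog loads the template from an S3 URL.
            "Info": {"LoadTemplateFromURL": template_url},
        },
    }

request = catalog_product_request(
    name="Moodle Reference Architecture",
    owner="University Central IT",
    template_url="https://example-bucket.s3.amazonaws.com/moodle.yaml",
)
```

Once created, the product is added to a portfolio and shared with the university's member accounts, so end users order it from the Cloud Portal rather than touching CloudFormation directly.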

Leveraging existing investments in AWS and standard products, the foundational pieces include a portfolio of useful software and architectures often used by universities:

  • Deploy Control Tower
  • Deploy GuardDuty
  • Deploy Security Hub
  • Deploy VPC + VPN
  • Deploy AD Extension
  • Deploy Web Applications SSO, Shibboleth, Drupal
  • Deploy FSx File Server
  • Deploy S3 Buckets for Backup Software
  • Deploy HIPAA workload
  • Deploy other solutions as needed: WorkSpaces, Duo, AppStream, etc.
  • WordPress Reference Architecture
  • Drupal Reference Architecture
  • Moodle Reference Architecture
  • Shibboleth Reference Architecture

How to Set Up and Use University in a Box
The RLCatalyst Cloud Portal solution enables a University with no existing Cloud to deploy a self-service model for internal IT and consume standard applications seamlessly.

Steps for University-Specific Setup (Approximate Time Taken)

  • Root account creation for a new university enabling core systems on AWS Cloud – 0.5 hours
  • Launch Control Tower and create the Core OU and University OU – 1.5 hours
  • User and access management, account creation, budget enablement – 1 hour
  • Network design of the university landing zone (creation + configuration) – 1.5 hours
  • Provisioning of basic assets (infra and applications) from the standard catalog – 1 hour
  • Enable security and governance (includes VA, PM, Security Hub) – 1.5 hours
  • User training and handover – 1 hour

The following diagram explains the deployment architecture of the solution.

University Users, Roles and Organization Planning
Planning for university users, roles and organizations requires mapping to existing departments, IT and non-IT roles and empowering users for self-service without compromising on security or governance. This can vary between organizations but common patterns are encountered as explained below.

  • Common Delegation use cases for University IT:
    • Delegate a product from a Lead Architect to Helpdesk, or a less skilled co-worker
    • Delegate a product from Lead Architect or Central IT, to another IT group, DBA team, Networking Team, Analytics Team
    • Delegate a product to another University Department – Academic, Video, etc
    • Delegate a product to a researcher or faculty member

Setup Planning Considerations on Deployment and Onboarding

          Hosting Options
        • Option 1 – Dedicated instance per customer
        • Option 2 – Hosted model; the customer brings their own AWS account
        • Option 3 – Hosted model; RL (Relevance Lab) provides a new AWS account

          Initial Catalog Setup
        • Option 1 – The customer has an existing Service Catalog
        • Option 2 – Default Service Catalog items are loaded from a standard library
        • Option 3 – A combination of the above

          Optimizing Setup Parameters and Catalog Binding for Ease of Use
        • Option 1 – The customer fills in details based on best practices and templates provided
        • Option 2 – RL sets up the initial configuration based on existing parameters
        • Option 3 – RL, as part of a new setup, creates an OU, a new account, and associated parameters

          Additional Setup Considerations
        • DNS mapping for the Cloud Portal
        • Authentication – default Cognito, with SAML integration available
        • Mapping users to roles, organizations/projects/budgets
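For the authentication consideration above, wiring a SAML identity provider (for example, a campus Shibboleth IdP) into the default Cognito user pool comes down to one `CreateIdentityProvider` call. The sketch below only assembles the request payload for `boto3`'s `cognito-idp` client; the user pool ID, provider name, and metadata URL are hypothetical placeholders.

```python
# Sketch: payload for Cognito's CreateIdentityProvider API, federating a
# SAML IdP (e.g., campus Shibboleth) into the portal's user pool.
# All identifiers below are illustrative placeholders.

def saml_provider_request(user_pool_id, provider_name, metadata_url):
    """Build the request body for cognito-idp.create_identity_provider()."""
    return {
        "UserPoolId": user_pool_id,
        "ProviderName": provider_name,
        "ProviderType": "SAML",
        # Cognito fetches the IdP's SAML metadata from this URL.
        "ProviderDetails": {"MetadataURL": metadata_url},
        # Map SAML assertion attributes onto Cognito user-pool attributes;
        # the OID below is the standard LDAP "mail" attribute.
        "AttributeMapping": {"email": "urn:oid:0.9.2342.19200300.100.1.3"},
    }

request = saml_provider_request(
    user_pool_id="us-east-1_EXAMPLE",
    provider_name="CampusShibboleth",
    metadata_url="https://idp.example.edu/idp/shibboleth",
)
```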

          Standard Catalog for University in a Box Leverages AWS-Provided Standard Architecture Best Practices
          The basic setup leverages the AWS Well-Architected Framework extensively and builds on AWS reference architectures, as detailed below. The following is a sample product preview list based on the AWS-provided University Catalog under its open-source program.

          University Catalog Portfolio – A portfolio of useful software and architectures often used by colleges and universities.
          WordPress Product with Reference Architecture – This Quick Start deploys WordPress, a web publishing platform for building blogs and websites that can be customized via a wide selection of themes, extensions, and plugins. The Quick Start includes AWS CloudFormation templates and a guide with step-by-step instructions to help you get the most out of your deployment. The reference architecture provides a set of YAML templates for deploying WordPress on AWS using Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2), Auto Scaling, Elastic Load Balancing (Application Load Balancer), Amazon Relational Database Service (Amazon RDS), Amazon ElastiCache, Amazon Elastic File System (Amazon EFS), Amazon CloudFront, Amazon Route 53, and AWS Certificate Manager (ACM) with AWS CloudFormation.
          Scale-Out Computing Product – Amazon Web Services (AWS) enables data scientists and engineers to manage scale-out workloads such as high-performance computing (HPC) and deep learning training without extensive cloud experience. The Scale-Out Computing on AWS solution helps customers more easily deploy and operate a multiuser environment for computationally intensive workflows such as Computer-Aided Engineering (CAE). The solution features a large selection of compute resources, a fast network backbone, unlimited storage, and budget and cost management directly integrated within AWS. It also deploys a user interface (UI) with cloud workstations, file management, and automation tools that let you create your own queues, scheduler resources, Amazon Machine Images (AMIs), and management functions for user and group permissions. The solution is designed as a production-ready reference implementation you can use as a starting point for deploying an AWS environment to run scale-out workloads, so users can focus on running simulations that solve complex computational problems. For example, with the unlimited storage capacity provided by Amazon Elastic File System (Amazon EFS), users won't run out of space for project input and output files. Additionally, you can integrate your existing LDAP directory with Amazon Cognito so users can seamlessly authenticate and run jobs on AWS.
          Drupal Reference Architecture – Drupal is an open-source content management platform written in the PHP server-side scripting language, providing a backend framework for many enterprise websites. Deploying Drupal on AWS makes it easy to use AWS services to further enhance performance and extend the functionality of your content management framework. The reference architecture provides a set of YAML templates for deploying Drupal on AWS using the same AWS services listed for WordPress above, with AWS CloudFormation.
          Moodle Reference Architecture – Moodle is a learning platform designed to provide educators, administrators, and learners with a single robust, secure, and integrated system for creating personalized learning environments. This repository consists of a set of nested templates that deploy a highly available, elastic, and scalable Moodle environment on AWS, using the same AWS services listed for WordPress above, with AWS CloudFormation. This architecture may be overkill for many Moodle deployments; however, the templates can be run individually and/or modified to deploy a subset of the architecture that fits your needs.
          Shibboleth Reference Architecture with EC2 – This Shibboleth IdP reference architecture deploys a fully functional, scalable, and containerized Shibboleth IdP, including rotation of IdP sealer keys using AWS Secrets Manager and AWS Lambda. The certificates that are part of the IdP, as well as some of the LDAP settings (including the username/password), are also stored in AWS Secrets Manager. The project is intended as a starting point for getting the Shibboleth IdP up and running quickly and easily on AWS, and as a foundation for building a production-ready deployment. Be aware that deleting the stack deletes its CodeCommit repository, so your customizations will be lost; if you intend to use this for production, make a copy of the repo, host it in your own account, and take precautions to safeguard your changes.
          REDCap on AWS CloudFormation – This repository contains AWS CloudFormation templates to automatically deploy a REDCap environment that adheres to AWS architectural best practices. To use this automation, you must supply your own copy of the REDCap source files, which are available to qualified entities. Once you have downloaded your source files, you can follow the instructions for deployment. In their own words: REDCap is a secure web application for building and managing online surveys and databases. While REDCap can be used to collect virtually any type of data, including in 21 CFR Part 11, FISMA, and HIPAA-compliant environments, it is specifically geared to support online or offline data capture for research studies and operations.

          University in a Box is a powerful example of a specific business problem solved by leveraging the cloud, integrated with existing customer-specific use cases and easy deployment options to save time and money and achieve maturity quickly.

          For universities, colleges, and schools trying to use AWS Cloud infrastructure, applications, and self-service models, the solution can bring significant cost, effort, and compliance benefits, helping them focus on "Driving Effective Learning" rather than worrying about enabling cloud infrastructure, basic day-to-day applications, and delegation of tasks to achieve scale. With a combination of a pre-built solution and a managed services model to handhold customers through a full lifecycle of development, enhancement, and support services, Relevance Lab can be your trusted partner for digital learning enablement.

          For demo video, please click here.

          To learn more about this solution, or to use it for your internal needs, feel free to contact us.



          2023 Blog, Digital Blog, Blog, Featured, Feature Blog

          Relevance Lab’s (RL) focus on addressing the digital transformation jigsaw puzzle has a strategic investment in leveraging Products & Platforms to create a unique differentiation and competitive advantage. We are a specialist Cloud, DevOps, Automation, and Analytics Services company with an IP (Intellectual Property) led technology strategy. This helps our customers achieve frictionless business outcomes by leveraging cloud across their infrastructure, applications, and data.

          We optimize IT spending with smart cloud workload migration, reducing ongoing operations costs by leveraging automation & governance, speeding up innovation in the delivery of new software products with Agile & DevOps, and getting key real-time business insights with Actionable Analytics.

          The key platforms and playbooks that we have are the following:

          RLCatalyst provides an “Automation-First” approach for Enterprise Cloud Orchestration across “Plan, Build & Run” lifecycle, leveraging our unique IP. A pre-built library of quick-starts, BOTs and Open-source solutions helps customers use Cloud “The Right Way” focused on best practices like “Infrastructure as Code” and “Compliance as Code”. We also have unique specialization on AWS and ServiceNow platforms leveraged to provide Intelligent Cloud Operations & managed services with our ServiceOne platform covering workload migration, security, governance, CMDB, ITSM, and DevOps.

          SPECTRA provides a Digital and Agile Analytics platform that helps build enterprise data lakes and Supply Chain analytics with multiple ERP systems connectors (SAP, Oracle, Dynamics, JDE, etc.). It also provides a smart-document search engine for Google-like features on enterprise digital documents (images, PDF, engg drawings, etc.). We leverage the Digital platforms for Frictionless Application modernization and Cloud Product Engineering services extending across platforms covering content, commerce, CRM, and supply chain (Adobe, Shopify, SFDC, Oracle Fusion, Azure PowerApps, Services & ADF) integrated with actionable insights from SPECTRA.

          The figure above explains our company's focus on driving frictionless IT and business operations by leveraging these key platforms. The focus on a "coded business model" that the platforms deliver helps us engage with customers across the full lifecycle, covering the following stages:

          • Assess the key customer needs as each customer has a unique model we evaluate based on 3C’s (Culture, Content, & Constraints)
          • Standardize the internal systems, processes, engineering practices, and governance
          • Automate everything repetitive impacting speed, costs, quality, and compliance
          • Accelerate the achievement of business objectives with faster software delivery, better operational excellence, and real-time Agile Analytics

          RLCatalyst Platform and ServiceOne Solution
          RLCatalyst is an intelligent automation platform built with DevOps, Cloud, and Automation features covering infrastructure, applications, data, and workflows. RLCatalyst common services foundation is built using an open architecture in line with the industry standards and can be customized. On top of the foundation services, a set of specialized products, solutions, and services are created to cover the specialized needs of customers. Following are a few key foundation themes for RLCatalyst:

          • Built on Open-source products to provide flexibility and scalability for hybrid cloud deployments
          • Uses “Infrastructure as Code” best practices and DevOps standards covering CI/CD, end-to-end monitoring, and orchestration
          • The platform is built with a UI Portal front end, a Node.js API-based backend, an integration layer for executing BOTs, and a database layer based on NoSQL
          • The core concept uses a “self-aware” paradigm to embed dynamic configurations, end-to-end monitoring, and dynamic CMDB to enable smart operations using other ITSM and Cloud platforms
          • The Cloud Portal drives self-service models of DevOps and can be customized to add domain-specific business rules for customers or industry type
          • There is “Compliance as Code” embedded into the design to make sure customers can be aligned with well-architected principles
          • The platform is built on top of AWS and ServiceNow ecosystem but can also be deployed on-prem or other cloud platforms
          • The solutions are pre-integrated with other popular DevOps and Cloud tools like Docker, Chef, Ansible, Terraform, Jenkins, ELK, Sensu, Consul, etc
          • The platform comes with a pre-built library of BOTs and Quickstart templates

          The combination of RLCatalyst and ServiceOne integrated solution provides an intelligent automation architecture, as explained in the figure below. The key building blocks are:

          • Discover the underlying assets, health, costs, vulnerability, security, and compliance.
          • Automate using a framework of BOTs built with self-aware intelligence covering tasks, workflows, decisioning, and AI/ML algorithms.
          • Resolve at speed all service management tickets and requests with complex workflows & integration across multiple systems

          SPECTRA Platform and Business Process Automation
          SPECTRA, the AI-driven Analytics platform from Relevance Lab, based on open-source technology, can fast-track your journey from data to actionable insights. It can process structured data from different ERP systems based on pre-existing adapters, as well as unstructured data from PDFs, emails, engineering drawings, and commercial labels. Every organization has invested in a combination of tools, technologies, and solutions to create its data platforms. However, most of these platforms are built with legacy technologies or fragmented components. When companies try to leverage the new technologies of Big Data, cloud architectures, and Artificial Intelligence to achieve more meaningful analytics, a pre-built platform like SPECTRA can save tremendous effort, cost, and time while providing a scalable and flexible alternative.

          Similar to IT optimization with RLCatalyst, we leverage the SPECTRA platform for business optimization with Agile Analytics, as explained in the figure below.

          We have also leveraged SPECTRA and UiPath integration to achieve business-process hyperautomation, as explained briefly below.

          Customer Impact with RL Playbooks for IT and Business Transformation
          Relevance Lab leverages our strengths in platforms for all our customer engagements to bring out special value on services delivery in areas of:

          • Cloud Infrastructure Automation
          • Data Analytics Platforms
          • Digital Applications and Product Engineering
          • Intelligent Operations and DevOps

          The figure below highlights the value created for some of our key customers.

          We have adopted the following maturity model as a specialist technology company with significant investments on competency and IP creation that guides the investments in RLCatalyst and SPECTRA platforms.

          • Level 1 – Deep Technology Expertise: Continuous learning and skills upgrades on the latest and emerging technologies across Cloud, Automation, Analytics, DevOps, and Digital
          • Level 2 – Focus on Certifications (Basic & Advanced): Promoting industry certifications to benchmark competencies against global standards, made part of every developer's career enhancement goals
          • Level 3 – Solutions and Best Practices (Process & Tools): Focus on customer solutions and recurring use cases to build a knowledge base of best practices across software engineering, operations excellence, and business domains
          • Level 4 – Platform Focus: "Codified knowledge" in the form of platforms for Data Analytics, DevOps, Cloud & Automation, with source code in reusable formats; Well-Architected Frameworks and open-source platforms with custom component enhancements and integrations save effort and time and improve quality with each new engagement
          • Level 5 – Product Offerings: Prescriptive, pre-created products that customers can use in a "touchless" manner as SaaS or Marketplace offerings, like a typical ISV solution, with little or no dependency on associated professional services; a major benefit in enabling a frictionless jumpstart on specific business problems

          Relevance Lab has made significant investments in creating IT and Business Transformation platforms that provide us a key competitive advantage in unlocking value for our customers across DevOps, Analytics, Cloud, Automation and Digital Engineering. By following a service maturity model that goes beyond just headcount and competencies we have been able to bring the value of platform and products to solve the digital transformation puzzle for our 50+ customers.

          To know more about how our products and platforms can help, feel free to contact us.


          2023 Blog, Blog, Featured

          Relevance Lab, a leading provider of digital transformation services, today announced that it has secured the backing of the US$700 million CSP Fund II, a technology-focused private equity fund. With this investment, Rajeev Srivastava and Sanjay Chakrabarty from CSP Fund II will join the Board of Relevance Lab. This comes on the back of the recently announced merger of CIGNEX and Excellerent with Relevance Lab. The merged entity now has a significant presence across North America, India, and Ethiopia, with a headcount of 1,500+ employees. The merger provides the platform with an integrated approach to addressing all the dimensions of digital transformation from its global development centers.

          Announcing the same, Vasu Sarangapani, recently appointed President & CEO, Relevance Lab, said, “I believe that with the backing of CSP Fund II, we will have the ability to accelerate business growth in our focus markets and execute on identified opportunities for M&As. This will also give us the opportunity to cross-sell and upsell within their larger portfolio”.

          Speaking on behalf of CSP Fund II, Rajeev Srivastava said, “Our core competency is in bringing small to mid-sized companies together under a unified platform and accelerating growth. We believe that this strategic merger, along with Vasu as President & CEO, provides the necessary impetus to scale Relevance Lab.”

          About Relevance Lab
          With its recent merger with CIGNEX and Excellerent, Relevance Lab is a leading provider of digital transformation and cloud services. The firm's global delivery footprint now spans India, North America, and Ethiopia, with 1,500+ global employees and innovation centers in India (Bangalore, Delhi NCR, and Ahmedabad) and in Ethiopia. The merged platform has the economies of scale for an integrated approach to addressing all the dimensions of digital transformation from its global development centers. To know more, click here: Relevance Lab | Driving Frictionless Business.

          About Capital Square Partners
          Founded in 2014 in Singapore, Capital Square Partners is a private equity firm investing in cross-border technology and business services across Southeast Asia and India. Launched in December 2022, the US$700 million CSP Fund II is building on a successful track record of investing in global technology services companies. Over the past decade, the team of Sanjay Chakrabarty, Rajeev Srivastava, Mukesh Sharda, Bharat Rao (non-executive director), and Sameer Kanwar has managed in excess of US$1.3 billion in AUM and has operated and exited multiple companies in the technology services space, including Minacs, Indecomm, and GAVS Technologies. Capital Square Partners holds a Capital Markets License from the Monetary Authority of Singapore, as per the Securities & Futures Act of the Government of Singapore. For more information click here.

          For original press release details click here.


          2023 Blog, ServiceOne, Blog, Featured

          Relevance Lab helps customers use cloud “The Right Way” with an Automation-First approach as part of our Governance360 solution offering. Customers implementing this solution go through a maturity model covering the following stages:

          • Basic Governance using AWS best practices with Control Tower, Security Hub, Management & Governance Lens
          • Advanced Governance with automation-led approach and deep integration with service management tools, vulnerability assessment, and remediations
          • Proactive and Preventive Governance with integrated end-to-end monitoring
          • Intelligent compliance with remediations

          As part of achieving this maturity model, it is important to have proper IT asset management, vulnerability assessment, and remediation models. A critical aspect of addressing infrastructure-level vulnerabilities depends on a smart patch management process. Patch management is a key part of your IT Operations to avoid potential exploitation and to ensure vulnerabilities are addressed on time by patching your systems, which includes operating systems, applications, and network components across your on-premises, cloud, or a hybrid setup.

          As shown below, patch management is a pivotal layer of security management and starts with the identification of assets from your asset inventory, followed by vulnerability assessment, patch management, security information & event management (SIEM), and visualization in the form of dashboards/reports.

          Let us look at the steps to automate the entire lifecycle of patch management, as shown in the picture below, along with some industry-standard tools/platforms.

          • Step 1: All vulnerabilities pertaining to operating systems and software are captured through periodic scans using agents and analyzed.
          • Step 2: Using patching solutions, identify the missing patches and correlate them to the vulnerabilities being addressed.
          • Step 3: Based on the criticality of the servers like Dev, Test, Prod, or criticality of the patches, the assets are identified for patching. A Change Request (CR) is raised with the details of what to patch, along with the patching windows, and the asset owners.
          • Step 4: Create a backup/snapshot before the patching activity and check for the patching client/agent availability on the servers planned for patching.
          • Step 5: Patch the servers during the agreed window, and if successful, CR is updated accordingly. In case of failure, CR is updated with a failure status.
          • Step 6: Post the patching activity, re-run the vulnerability scan to ensure all patch-related vulnerabilities are addressed and taken care of. The servers are also validated for the functionality of the applications before the CR can be closed.
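The six steps above can be sketched as a small orchestration routine. This is an illustrative skeleton, not a real tool's API: the `scan`, `patch`, and `validate` callables stand in for the vulnerability scanner, patching solution, and post-patch checks, and the change request (CR) is modeled as a plain dict.

```python
# Sketch of the six-step patch lifecycle; scanner/patcher/validator are
# injected callables standing in for the real tools (hypothetical names).

def run_patch_cycle(servers, scan, patch, validate):
    """Walk one change request (CR) through the patch lifecycle."""
    cr = {"servers": servers, "status": "OPEN"}
    findings = scan(servers)                  # Step 1: periodic vulnerability scan
    cr["missing_patches"] = findings          # Steps 2-3: correlate patches, plan CR
    cr["backups"] = [f"snapshot-{s}" for s in servers]  # Step 4: snapshot first
    ok = patch(servers, findings)             # Step 5: patch in the agreed window
    if ok and validate(servers):              # Step 6: rescan + application checks
        cr["status"] = "CLOSED"
    else:
        cr["status"] = "FAILED"
    return cr

# Example run with trivial stand-ins for the real tools:
cr = run_patch_cycle(
    ["web-01", "db-01"],
    scan=lambda servers: {s: ["CVE-2023-0001"] for s in servers},
    patch=lambda servers, findings: True,
    validate=lambda servers: True,
)
print(cr["status"])  # CLOSED
```

The point of the skeleton is the ordering: no patching happens before a backup exists, and the CR only closes after the rescan and application validation both pass.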

          Use Case Benefits for Customers
          By automating patch management, customers get near real-time visibility into the security compliance of their infrastructure, ensure an ongoing periodic patching process, and gain a 360-degree view of their IT infrastructure through dashboards. Enabling automated patching can save a lot of time and resources.

          Compliance Benefits:

          • Secured and centralized way of monitoring dashboard
          • Automated patching
          • Operational consistency across all businesses
          • Providing ease of security auditing
          • Periodic & timely notifications of the compliance/non-compliance status report to IT teams or individuals

          The IT team can create their own custom patch baselines and decide which patches to auto-approve by using the following categories.

          • Operating Systems: Windows, Amazon Linux, Ubuntu Server, etc.
          • Product Name: e.g., RHEL 6.5, Amazon Linux 2014.09, Windows Server 2012, Windows Server 2012 R2, etc.
          • Classification: Critical updates, security updates, etc.
          • Severity: Critical, important, etc.
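With AWS Systems Manager Patch Manager, those four knobs map directly onto the patch filters of the `CreatePatchBaseline` API. The sketch below only builds a request payload, with `boto3`'s `ssm` client in mind, for a hypothetical Windows baseline (the baseline name and seven-day auto-approval delay are illustrative choices); note that for Windows, severity is filtered via the `MSRC_SEVERITY` key.

```python
# Sketch: payload for SSM's CreatePatchBaseline API, auto-approving critical
# and security updates for chosen Windows products after 7 days. The baseline
# name and approval delay below are illustrative choices, not a prescription.

def windows_baseline_request(name, products, approve_after_days=7):
    """Build the request body for ssm.create_patch_baseline()."""
    return {
        "Name": name,
        "OperatingSystem": "WINDOWS",
        "ApprovalRules": {
            "PatchRules": [
                {
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {"Key": "PRODUCT", "Values": products},
                            {"Key": "CLASSIFICATION",
                             "Values": ["CriticalUpdates", "SecurityUpdates"]},
                            # Windows baselines filter severity via MSRC_SEVERITY.
                            {"Key": "MSRC_SEVERITY",
                             "Values": ["Critical", "Important"]},
                        ]
                    },
                    "ApproveAfterDays": approve_after_days,
                }
            ]
        },
    }

request = windows_baseline_request(
    name="prod-windows-baseline", products=["WindowsServer2012R2"]
)
```

The returned baseline ID would then be associated with a patch group via `RegisterPatchBaselineForPatchGroup`, so each environment gets the approval policy intended for it.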

          Use Case of Hybrid Setup Patch Management
          As shown in the sample below, there are two environments, Prod and Dev, referred to as patch groups. This helps avoid deploying patches to the wrong set of instances. A patch group must be defined with the tag key Patch Group; for example, we have created a patch group tag value called Dev below. A fleet of instances carrying these tags can be patched using this approach.

          Details of the Architecture

          • AWS Systems Manager gathers asset inventory details and a pre-configured maintenance window automatically scans for the latest patches for the server groups at a scheduled time.
          • The automated patch function lambda is scheduled to run daily to collect the patch group and maintenance window details. It also creates the patch group and maintenance schedule tags on the managed instances.
          • This lambda function then creates or updates the right patch groups and maintenance schedules, associates the patch groups with the patch baselines, configures the patch scans, and deploys the patching task. You can also notify users of impending patches using CloudWatch Events.
          • As per the maintenance schedule, the events will send patch notifications to the application teams with the details of the impending patch operation.
          • Patch Manager then initiates the patching based on the predefined window and patch groups.
          • Details about patching are retrieved using resource data sync in Systems Manager and published to an S3 bucket.
          • Using this data from the S3 bucket, you can build a visualization dashboard about the patch compliance in Amazon QuickSight.
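The maintenance-window pieces of the architecture above can be sketched as two API payloads: one for `CreateMaintenanceWindow` and one for registering the AWS-managed `AWS-RunPatchBaseline` run-command document as the window's task, targeted by the `Patch Group` tag. The cron schedule, names, window ID, and the `Dev` group value are illustrative placeholders.

```python
# Sketch: payloads for SSM's CreateMaintenanceWindow and
# RegisterTaskWithMaintenanceWindow APIs. The schedule, names, window ID,
# and "Dev" patch group below are illustrative placeholders.

def maintenance_window_request(name, schedule):
    """Build the request body for ssm.create_maintenance_window()."""
    return {
        "Name": name,
        "Schedule": schedule,          # e.g. every Sunday at 02:00 UTC
        "Duration": 3,                 # window length, in hours
        "Cutoff": 1,                   # stop starting new tasks 1h before close
        "AllowUnassociatedTargets": False,
    }

def patch_task_request(window_id, patch_group):
    """Build the request body for ssm.register_task_with_maintenance_window()."""
    return {
        "WindowId": window_id,
        "TaskArn": "AWS-RunPatchBaseline",   # AWS-managed patching document
        "TaskType": "RUN_COMMAND",
        # Select instances by their "Patch Group" tag value.
        "Targets": [{"Key": "tag:Patch Group", "Values": [patch_group]}],
        "TaskInvocationParameters": {
            # "Install" applies patches; "Scan" would only report compliance.
            "RunCommand": {"Parameters": {"Operation": ["Install"]}}
        },
    }

window = maintenance_window_request("dev-patching", "cron(0 2 ? * SUN *)")
task = patch_task_request("mw-0123456789abcdef0", "Dev")
```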

          As explained earlier, visualization is an essential layer showing the near real-time security status of your IT infrastructure. This can be a dashboard, as shown below.

          Getting Started
          Patch Management is available as a professional service offering and also as an AWS marketplace offering under Governance360. Below are the steps to take the customer from discovery to steady state.

          • Step 1 – Discovery: Assess the current landscape of processes and tools/technology
          • Step 2 – Recommend: Present the current gaps and benchmark against industry standards
          • Step 3 – Plan and Implement: Design and implement the proposed solution in a phased manner
          • Step 4 – Ongoing: Bring the solution to a stable state / BAU (Business As Usual)

          In this blog post, we covered the key aspects of automated patch management for enterprises. Relevance Lab has implemented automated patch management, part of our Automation Factory Suite, for key customers, bringing better detection, assessment, and compliance to their cloud governance. The entire solution is available as a reusable framework that can save new enterprises 6+ months of time, effort, and cost on new deployments.

          To know more about our Governance360 offering and its building blocks, including automated patch management, feel free to contact

          Automated Patch Management for Cloud & Data Centers


          2023 Blog, Blog, Featured

          CIGNEX and Excellerent today announced their merger with Relevance Lab, to become a global powerhouse in digital transformation and cloud services. With this merger, Relevance Lab, headquartered in Singapore, will have delivery presence across North America, India and Ethiopia and a global headcount of 1500+ employees. While Relevance Lab excels in DevOps/Automation on Infrastructure, Applications and Data, CIGNEX is a leader in Open-Source Technologies and Cloud that are used to engineer/deploy digital transformations & robotic process automation applications; and Excellerent, besides its Agile Engineering prowess, provides a unique differentiator with its development center in Ethiopia. The merger provides the platform economies of scale and an integrated approach to address all the dimensions of digital transformation from its global development centers. Incumbent management of the respective companies will continue in their new roles under the new CEO’s leadership.

          With this merger, Vasu Sarangapani has joined Relevance Lab as its new President & CEO. Vasu comes with over 30 years of experience in technology services. Prior to this role, Vasu was with GlobalLogic Inc, where he was the Chief Growth Officer and, before that, Chief Sales Officer of the company. In his tenure spanning 9 years, he helped expand the company’s global business significantly and played an instrumental role in providing multiple exits for the PE investors.

          Explaining the rationale behind the merger, Vasu Sarangapani, incoming President & CEO, Relevance Lab, said, “Digital Transformation for enterprises is an existential necessity today and CXO’s want to accomplish this quickly by leveraging technology and partnerships to gain even the smallest competitive advantage. I strongly believe that the merged entity, with its deep technology expertise and assets driven approach, is well positioned to capture a big chunk of this digital services market and am very excited to be a part of this compelling story.”

          “Given that the 3 companies had a common investor and the management team’s high levels of comfort working with each other over the years, it was only natural for us to merge as one company to unify our complementary technology offerings and service our customers. Under the leadership of Vasu, we look forward to rapidly increasing value creation for all stakeholders,” commented Raja Nagarajan, Founder & incumbent CEO, Relevance Lab.

          About Relevance Lab
          Relevance Lab is a specialized technology services company with technology assets in the DevOps, Cloud, Automation, Service Delivery and Agile Analytics domains. Using an asset leveraged delivery model, Relevance Lab helps global organizations achieve frictionless business transformation across Infrastructure, Applications and Data. For more details visit

          For original press release details click here.


          2023 Blog, SWB Blog, Blog, Featured

          While there is rapid momentum for every enterprise in the world in consuming more Cloud assets and services, there is still a lack of maturity in adopting an “Automation-First” approach to establish self-service models for Cloud consumption, due to fear of uncontrolled costs, security & governance risks, and the lack of standardized Service Catalogs of pre-approved assets & service requests from central IT groups. Lack of delegation and self-service has a direct impact on the speed of innovation and productivity, with higher operations costs.

          Working closely with AWS as a partner, we have now created a flexible platform for driving faster adoption of Self-Service Cloud Portals. The primary needs for such a Self-Service Cloud Portal are the following.

          • Adherence to Enterprise IT Standards
            • Common architecture
            • Governance and Cost Management
            • Deployment and license management
            • Identity and access management
          • Common Integration Architecture with existing platforms on ITSM and Cloud
            • Support for ServiceNow, Jira, Freshservice and Standard Cloud platforms like AWS
          • Ability to add specific custom functionality in the context of Enterprise Business needs
            • The flexibility to add business specific functionality is key to unlocking the power of self-service models outside the standard interfaces already provided by ITSM and Cloud platforms

          A common way of identifying the need for a Self-Service Cloud Portal is to ask the following questions.

          • Does your enterprise already have any Self-Service Portals?
          • Do you have a large user base internally or with external users requiring access to Cloud resources?
          • Does your internal IT have the bandwidth and expertise to manage current workloads without impacting end user response time expectations?
          • Does your enterprise have a proper security governance model for Cloud management?
          • Are there significant productivity gains by empowering end users with Self-Service models?

          Working with AWS and our existing customers, we see a growing need for Self-Service Cloud Portals in 2023, predominantly centred around two models.

          • Enterprises with existing ITSM investments and need to leverage that for extending to Cloud Management
          • Enterprises extending needs outside enterprise users with custom Cloud Portals

          The roadmap to Self-Service Cloud Portals is specific to each enterprise’s needs and should leverage the existing adoption and maturity of Cloud and ITSM platforms, as explained below. With Relevance Lab RLCatalyst products, we help enterprises achieve this maturity in a cost-effective and expedited manner.

          Examples of Self-Service Cloud Portals

          Standard Needs and Platform Benefits

          • Look-n-Feel of Modern Self-Service Portals – Professional and responsive UI design with multiple themes available; customizations allowed
          • Standards based Architecture & Governance – Tightly built on AWS products and AWS Well-Architected, with pre-built Reference Architecture based products
          • Pre-built Minimum Viable Product Needs – 80-20 model of pre-built vs customizations, based on key components of core functionality
          • Proprietary vs Open Source? – Open-source foundation with source code made available, built on the MEAN stack
          • Access Control, Security and Governance – Standard options pre-built with easy extensions (SAML based); deployed with enterprise-grade security and compliance
          • Rich Standard Pre-Built Catalog of Assets and Services – Comes pre-built with 100+ catalog items covering all standard asset and service needs, catering to 50% of any enterprise’s infrastructure, applications and service delivery needs

          Explained below is a sample AWS Self-Service Cloud for driving Scientific Research.

          Getting started
          To make it easier for enterprises to experience the power of Self-Service Cloud Portals, we are offering two options based on enterprise needs.

          • Hosted SAAS offering of using our Multi-tenant Cloud Portal with ability to connect to your existing Cloud Accounts and Service Catalogs
          • Self-Hosted RLCatalyst Cloud Portal product with option to engage us for professional services on customizations, training, initial setup & onboarding needs

          Pricing for the SAAS offering is based on a per-user monthly subscription, while for the self-hosted model an enterprise support pricing plan is available for the open-source solution, allowing enterprises the flexibility to use the solution without proprietary lock-ins.

          The typical steps to get started are very simple covering the following.

          • Setup an organization and business units or projects aligned with your Cloud Accounts for easy billing and access control tracking
          • Setup users and roles
          • Setup Budgets and controls
          • Setup standard catalog of items for users to order
          • With the above, enterprises are up and running with Self-Service Cloud Portals in less than 1 day, with inbuilt controls for tracking and compliance
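The setup steps above can be sketched as a minimal data model (purely illustrative names; the actual RLCatalyst portal maps these concepts to AWS accounts, IAM roles, and Service Catalog items):

```python
# Illustrative sketch of portal onboarding: organization, business units,
# users/roles, budgets, and a catalog with ordering controls.

class CloudPortal:
    def __init__(self, organization):
        self.organization = organization
        self.business_units = {}   # name -> {"budget": float, "spend": float}
        self.users = {}            # user -> role
        self.catalog = []          # pre-approved orderable items

    def add_business_unit(self, name, budget):
        self.business_units[name] = {"budget": budget, "spend": 0.0}

    def add_user(self, user, role):
        self.users[user] = role

    def add_catalog_item(self, item):
        self.catalog.append(item)

    def order(self, user, unit, item, cost):
        """Enforce role, catalog, and budget controls before provisioning."""
        if self.users.get(user) not in ("admin", "researcher"):
            return "denied: role"
        if item not in self.catalog:
            return "denied: not in catalog"
        bu = self.business_units[unit]
        if bu["spend"] + cost > bu["budget"]:
            return "denied: over budget"
        bu["spend"] += cost
        return "provisioned"
```

The point of the sketch is the ordering guardrail: every request passes role, catalog, and budget checks before anything is provisioned, which is what makes delegation to end users safe.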

          Cloud Portals for Self-Service is a growing need in 2023 and we see the momentum continuing for next year as well. Different market segments have different needs for Self-Service Cloud portals as explained in this Blog.

          • Scientific Research community is interested in a Research Gateway Solution
          • University IT looks for a University in a Box Self-Service Cloud
          • Enterprises using ServiceNow want to extend the internal Self-Service Portals
          • Enterprises are also developing Hybrid Cloud Orchestration Portals
          • Enterprises looking at building AIOps Portal needs monitoring, automation and service management
          • Enabling Virtual Training Labs with User and Workspace onboarding
          • Building an integrated Command Centre requires an Intelligent Monitoring portal
          • Enterprise Intelligent Automation Portal with ServiceNow Connector

          We provide pre-built solutions for Self-Service Cloud Portals and a base platform that can be easily extended to add new functionality for customization and integration. A number of large enterprises and universities are leveraging our Self-Service Cloud Portal solutions using both existing ITSM tools (ServiceNow, Jira, Freshservice) and RLCatalyst products.

          To learn more about using AWS Cloud or ITSM solutions for Self-Service Cloud portals contact


          2023 Blog, AWS Platform, Blog, Featured, Feature Blog

          Relevance Lab (RL) is a specialist company in helping customers adopt cloud “The Right Way” by focusing on an “Automation-First” and DevOps strategy. It covers the full lifecycle of migration, governance, security, monitoring, ITSM integration, app modernization, and DevOps maturity. Leveraging a combination of services and products for cloud adoption, we help customers on a “Plan-Build-Run” transformation that drives greater velocity of product innovation, global deployment scale, and cost optimization for new generation technology (SaaS) and enterprise companies.

          In this blog, we will cover some common themes that we have been using to help our customers for cloud adoption as part of their maturity journey.

          • SaaS with multi-tenant architecture
          • Multi-Account Cloud Management for AWS
          • Microservices architecture with Docker and Kubernetes (AWS EKS)
          • Jenkins for CI/CD pipelines and focus on cloud agnostic tools
          • AWS Control Tower for Cloud Management & Governance solution (policy, security & governance)
          • DevOps maturity models
          • Cost optimization, agility, and automation needs
          • Standardization for M&A (Merger & Acquisitions) integrations and scale with multiple cloud provider management
          • Spectrum of AWS governance for optimum utilization, robust security, and budget reduction
          • Automation/BOT landscape, how different strategies are appropriate at different levels, and the industry best practice adoption for the same
          • Reference enterprise strategy for structuring DevOps for engineering environment which has cloud native development and the products which are SaaS-based.

          Relevance Lab Cloud and DevOps Credentials at a Glance

          • RL has been a cloud, DevOps, and automation specialist since inception in 2011 (10+ years)
          • Implemented 50+ successful customer cloud projects covering Plan-Build-Run lifecycle
          • Globally has 250+ cloud specialists with 100+ certifications
          • Cloud competencies cover infra, apps, data, and consulting
          • Provide deep consulting and technology in cloud and DevOps
          • RL products available on AWS and ServiceNow marketplace recognized globally as a specialist in “Infrastructure Automation”
          • Deep architecture know-how on DevOps with microservices, containers, and Well-Architected principles
          • Large enterprise customers with 10+M$ multi-year engagements successfully managed
          • Actively managing 7000+ cloud instances, 300+ Applications, annual 5.0+M$ cloud consumption, 20K+ annual tickets, 100+ automation BOTs, etc.

          Need for a Comprehensive Approach to Cloud Adoption
          Most enterprises today have their applications in the cloud or are aggressively migrating new ones to achieve the digital transformation of their business. However, the approach requires customers to think about the “Day-After” Cloud in order to avoid surprises on costs, security, and additional operations complexities. Having the right Cloud Management not only helps eliminate unwanted costs and compliance issues, but also helps in the optimal use of resources, ensuring “The Right Way” to use the cloud. Our “Automation-First” approach helps minimize manual intervention, thereby reducing error-prone manual effort and costs.

          RL’s mature DevOps framework helps ensure that application development is done with accuracy, agility, and scale. Finally, to ensure this whole framework of Cloud Management, Automation and DevOps continues in a seamless manner, you need the right AIOps-driven Service Delivery Model. Hence, for any mature organization, the below 4 themes become the foundation: Cloud Management, Automation, DevOps, and AIOps.

          Cloud Management
          RL offers a unique methodology covering Plan-Build-Run lifecycle for Cloud Management, as explained in the diagram below.

          Following are the basic steps for Cloud Management:

          Step-1: Leverage
          Built on best practices offered from native cloud providers and popular solution frameworks, RL methodology leverages the following for Cloud Management:

          • AWS Well-Architected Framework
          • AWS Management & Governance Lens
          • AWS Control Tower for large scale multi-account management
          • AWS Service Catalog for template-driven organization standard product deployments
          • Terraform for Infra as Code automation
          • AWS CloudFormation Templates
          • AWS Security Hub

          Step-2: Augment
          The basic Cloud Management best practices are augmented with unique products & frameworks built by RL based on our 50+ successful customer implementations covering the following:

          • Quickstart automation templates
          • AppInsights and ServiceOne – built on ITSM
          • RLCatalyst cloud portals – built on Service Catalog
          • Governance360 – built on Control Tower
          • RLCatalyst BOTS Automation Server

          Step-3: Instill
          Instill ongoing maturity and optimization using the following themes:

          • Four level compliance maturity model
          • Key Organization metrics across assets, cost, health, governance, and compliance
          • Industry-proven methodologies like HIPAA, SOC2, GDPR, NIST, etc.

          For Cloud Management and Governance, RL has Solutions like Governance360, AWS Management and Governance lens, Cloud Migration using CloudEndure. Similarly, we have methodologies like “The Right Way” to use the cloud, and finally Product & Platform offerings like RLCatalyst AppInsights.

          RL promotes an “Automation-First” approach for cloud adoption, covering all stages of the Plan-Build-Run lifecycle. We offer a mature automation framework called RLCatalyst BOTs and self-service cloud portals that allow full lifecycle automation.

          In terms of deciding how to get started with automation, we help with an initial assessment model on “What Can Be Automated” (WCBA) that analyses the existing setup of cloud assets, applications portfolio, IT service management tickets (previous 12 months), and Governance/Security/Compliance models.

          For the Automation theme, RL has Solutions like Automation Factory, University in a Box, Scientific Research on Cloud, 100+ BOTs library, custom solutions on Service WorkBench for AWS. Similarly, we have methodologies like Automation-First Approach, and finally Product & Platform offerings like RL BOTs automation Engine, Research Gateway, ServiceNow BOTs Connector, UiPath BOTs connector for RPA.

          The following blogs explain in more detail our offerings on automation.

          DevOps and Microservices
          DevOps and microservices with containers are a key part of all modern architecture for scalability, re-use, and cost-effectiveness. RL, as a DevOps specialist, has been working on re-architecting applications and cloud migration across different segments covering education, pharma & life sciences, insurance, and ISVs. The adoption of containers is a key building block for driving faster product deliveries leveraging Continuous Integration and Continuous Delivery (CI/CD) models. Some of the key considerations followed by our teams cover the following for CI/CD with Containers and Kubernetes:

          • Role-based deployments
          • Explicit declarations
          • Environment dependent attributes for better configuration management
          • Order of execution and well-defined structure
          • Application blueprints
          • Repeatable and re-usable resources and components
          • Self-contained artifacts for easy portability
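As one illustration of "environment dependent attributes for better configuration management" from the list above, a layered configuration can be sketched like this (illustrative keys only, not the actual blueprint format used in our deployments):

```python
# Sketch of environment-dependent configuration: a common base layer
# with per-environment overrides, merged at deploy time.

BASE_CONFIG = {
    "replicas": 2,
    "log_level": "info",
    "image": "myapp:latest",
}

ENV_OVERRIDES = {
    "dev":  {"replicas": 1, "log_level": "debug"},
    "prod": {"replicas": 6, "image": "myapp:stable"},
}

def render_config(env):
    """Merge base attributes with environment-specific overrides."""
    config = dict(BASE_CONFIG)          # explicit declarations in the base
    config.update(ENV_OVERRIDES.get(env, {}))  # env-specific attributes win
    config["environment"] = env
    return config
```

Keeping the base explicit and the overrides small makes the same application blueprint repeatable across dev, test, and prod, which is the essence of the considerations listed above.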

          The following diagram shows a standard blueprint we follow for DevOps:

          For the DevOps & Microservices theme, RL has Solutions like CI/CD Cockpit solution, Cloud orchestration Portal, ServiceNow/AWS/Azure DevOps, AWS/Azure EKS. Similarly, we have methodologies like WOW DevOps, DevOps-driven Engineering, DevOps-driven Operations, and finally Product & Platform offerings like RL BOTs Connector.

          AIOps and Service Delivery
          RL brings in unique strengths across AIOps with IT Service Delivery Management on platforms like ServiceNow, Jira ServiceDesk and FreshService. By leveraging a platform-based approach that combines intelligent monitoring, service delivery management, and automation, we offer a mature architecture for achieving AIOps in a prescriptive manner with a combination of technology, tools, and methodologies. Customers have been able to deploy our AIOps solutions in 3 months and benefit from achieving 70% automation of inbound requests, reduction of noise on proactive monitoring by 80%, 3x faster fulfillment of Tickets & SLAs with a shift to a proactive DevOps-led organization structure.

          For the AIOps & Service Delivery theme, RL has Solutions like AIOps Blueprint, ServiceNow++, End to End Automated Patch Management, Asset Management NOC & ServiceDesk. Similarly, we have methodologies like ServiceOne and finally Product & Platform offerings like ServiceOne with ServiceNow, ServiceOne with FreshService, RLCommand Center.

          RL offers a combination of Solutions, Methodologies, and Product & Platform offerings covering the full 360° spectrum of enterprise Cloud & DevOps adoption across 4 different tracks: Cloud Management, Automation, DevOps, and AIOps. The benefits of a technology-driven approach that leverages an “Automation-First” model have helped our customers reduce their IT spend by 30% over a period of 3 years, with 3x faster product deliveries and real-time security & compliance.

          To know more about our Cloud Centre of Excellence and how we can help you adopt Cloud “The Right Way” with best practices leveraging Cloud Management, Automation, DevOps, and AIOps, feel free to write to

          Reference Links
          Considerations for AWS AMI Factory Design


          2022 Blog, Analytics, SPECTRA Blog, Blog, Featured

          The Consumer Packaged Goods (CPG) Industry is one of the largest industries on the planet. From food and beverage to clothes to stationery, it is impossible to think of a moment in our lives without being touched or influenced by this sector. If there is one paradigm around which the industry revolves, regardless of the sub-sector or the geography, it is the fear of stock outs. Studies indicate that when a customer finds a product unavailable, 31% are likely to switch over to a competitor the first time it happens. This becomes 50% the second time and rises to 70% the third time.

          Historically, the panacea for this problem has been to overstock. While this reduced the risk of stock outs to a great extent, it induced a high cost for holding the inventory and increased risk of obsolescence. It also created a shortage of working capital since a part of it is always locked away in holding excess inventory. This additional cost is often passed on to the end customer. Over time, an integrated planning solution which could predict demand, supply and inventory positions became a key differentiator in the CPG industry since it helped rein in costs and become competitive in an industry which is extremely price sensitive.

          Although theoretically, a planning solution should have been able to solve the inventory puzzle, practically, a lot of challenges kept limiting its efficacy. Conventional planning solutions have been built based on local planning practices. Such planning solutions have had challenges negotiating the complex demand patterns of customers, which are influenced by general consumer behaviour as well as seasonal trends in the global market. As a result, the excess inventory problem persists, at times exacerbated by the bullwhip effect.

          This is where the importance of a global integrated Production Sales Inventory (PSI) solution comes in. But usually, this is easier said than done. Large organizations face multiple practical challenges when they attempt to implement this. Following are the typical challenges that large organizations face:

          • Infrastructural Limitations
            Using conventional Business Intelligence or Planning systems would require very heavy investment in infrastructure and systems. Also, the results may not be proportionate to the investments made.
          • Data Silos
            PSI requires data from different departments including sales, production, and procurement/sourcing. Even if the organization has a common ERP, the processes and practices in each department might make it difficult to combine data and get insights.
            Another significant hurdle is the fact that larger organizations usually tend to have multiple ERPs for handling local transactions aligned to geographical markets. Each ERP or data source which does not talk to other systems becomes siloed. The complexities increase when the data formats and tables are incompatible, especially, when the ERPs are from different vendors.
          • Manual Effort
            Harmonizing the data from multiple systems and making it coherent involves a huge manual effort in design, build, test and deployment if we follow the conventional mode. The prohibitive costs involved, not to mention the human effort required, are a huge challenge for most organizations.

          Relevance Lab has helped multiple customers tide over the above challenges and get a faster return on their investments.

          Here are the steps we follow to achieve a responsive global supply chain

          • Gather Data: Collate data from all relevant systems
            Leveraging data from as many relevant sources (both internal and external) as possible is one of the most important steps in ensuring a responsive global supply chain. The challenge of handling the huge data volume is addressed through the use of big data technologies. The data gathered is then cleansed and harmonized using SPECTRA, Relevance Lab’s big data/analytics platform. SPECTRA can then combine the relevant data from multiple sources and refresh the results at specified periodic intervals. One point of note here is that Master Data harmonization, which usually consumes months of effort, can be significantly accelerated with SPECTRA’s machine learning and NLP capabilities.

          • Gain Insights: Know the as-is states from intuitive visualizations
            The data pulled in from various sources can be combined to see a snapshot of inventory levels across the supply chain. SPECTRA’s built-in data models and quasi plug-and-play visualizations ensure that users get a quick and accurate picture of their supply chain. Starting with a bird’s eye view of the current inventory levels across various types of stocking locations and across each inventory type, the visualization capabilities of SPECTRA can be leveraged to get a granular view of current inventory positions or backlog orders, or to compare sales with forecasts. This is a critical step in the overall process, as it helps organizations clearly define their problems and identify likely end states. For example, the organization could go for a deeper analysis to identify slow-moving and obsolete inventory or fine-tune their planning parameters.

          • Predict: Use big data to predict inventory levels
            The data from various systems can be used to predict the likely inventory levels based on service level targets, demand predictions, production and procurement information. Time series analysis is used to predict the lead time for production and procurement. Projected inventory level calculations for future days/weeks, thus computed, are more likely to reflect the actual inventory levels since the uncertainties, both external and internal, have been well accounted for.

          • Act: Measurement and Continuous Improvement
            Inventory management is a continuous process. The above steps would provide a framework for measuring and tracking the performance of the inventory management solution and make necessary course corrections based on real time feedback.
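A highly simplified sketch of the Predict step follows (SPECTRA's actual models use time-series analysis for lead times and far richer demand forecasts; the formulas here are the textbook projected-inventory and reorder-point calculations):

```python
# Simplified projected-inventory and reorder-point calculations.
import statistics

def projected_inventory(on_hand, receipts, forecast):
    """Period-by-period projection: on-hand + scheduled receipts - forecast demand."""
    levels, level = [], on_hand
    for received, demanded in zip(receipts, forecast):
        level = level + received - demanded
        levels.append(level)
    return levels

def reorder_point(demand_history, lead_time_weeks, service_factor=1.65):
    """Reorder point = lead-time demand + safety stock (normal approximation).

    service_factor 1.65 corresponds to roughly a 95% service level target.
    """
    mean = statistics.mean(demand_history)
    sd = statistics.stdev(demand_history)
    return mean * lead_time_weeks + service_factor * sd * lead_time_weeks ** 0.5
```

For example, starting with 100 units on hand, a receipt of 50 in week two, and steady demand of 30 per week, `projected_inventory(100, [0, 50, 0], [30, 30, 30])` projects levels of 70, 90, and 60, flagging in advance whether the plan dips below safety stock.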

          Successful inventory management is one of the basic requirements for financial success for companies in the Consumer Packaged Goods sector. There is no perfect solution, as customer needs and the environment are dynamic, and the optimal solution can only be reached iteratively. Relevance Lab’s framework for inventory management, combining deep domain experience with SPECTRA’s capabilities – NLP for faster master data management & harmonization, pre-built data models, quasi plug-and-play visualizations and custom algorithms – offers a faster turn-around and quicker Return-on-Investment. Additionally, the comprehensive process ensures that the data is massaged and prepped for both broader and deeper analysis of the supply chain and risk in the future.

          Additional references

          To learn how you can leverage ML and AI within your customer retention strategy, please reach out to


          2022 Blog, Digital Blog, Blog, Featured

          In our increasingly digitized world, companies across industries are embarking on digital transformation journeys to transform their infrastructure, application architecture and footprint to a more modern technology stack, one that allows them to be nimble and agile when it comes to maintainability, scalability, easier deployment (smaller units can be deployed frequently).

          Old infrastructure and the traditional ways of building applications are inhibiting growth for large enterprises, mid-sized and small businesses. Rapid innovation is needed to roll out new business models, optimize business processes, and respond to new regulations. Business leaders and employees understand the need for this agility – everyone wants to be able to connect to their Line of Business (LOB) systems through mobile devices or remotely in a secure and efficient manner, no matter how old or new these systems are, and this is where Application Modernization comes into the picture.

          A very interesting use case was shared with us by a large Financial Asset Management customer of ours. They had a legacy application, 15+ years old, with challenges like tightly coupled business modules, code base/solution maintainability, complexity in implementing a lighter version of workflow, no modular way of deploying key features, and a legacy technology stack. To solve this problem, we ran a solid envisioning phase for the future-state application, considering a next-generation solution architecture approach, the latest technology stack, and value-adds for the business: a lighter-weight workflow engine, responsive design, and an End-to-End (E2E) DevOps solution.

          Legacy Application Modernizations/Platform Re-Engineering
          Legacy application modernization projects intend to create new business value from existing, aging applications by updating or replacing them with modern technologies, features and capabilities. By migrating the legacy applications, business can include the latest functionalities that better align with where business needs transformation & success.

          These initiatives are typically designed and executed with phased rollouts that will replace certain functional feature sets of the legacy application with each successive rollout, eventually evolving into a complete, new, agile, modern application that is feature-rich, flexible, configurable, scalable and maintainable in future.

          Monolithic Architecture Vs Microservices Architecture – The Big Picture

          Monolithic Architecture

          • Traditional way of building applications
          • An application is built as one large system and is usually one codebase
          • Application is tightly coupled and gets entangled as the application evolves
          • Difficult to isolate services for purposes such as independent scaling or code maintainability
          • Usually deployed on a set of identical servers behind a load balancer
          • Difficult to scale parts of the application selectively
          • Usually have one large code base and lack modularity. If developers want to update or change something, they work in the same code base and must make changes across the whole stack at once

          The following diagram depicts an application built using Monolithic Architecture

          Microservices Architecture

          • Modern way of building applications
          • A microservice application typically consists of many services
          • Each service has multiple runtime instances
          • Each service instance needs to be configured, deployed, scaled, and monitored

          Microservices Architecture – Tenets
          The Microservices Architecture breaks the Monolithic application into a collection of smaller, independent units. Some of the salient features of Microservices are

          • Highly maintainable and testable
          • Autonomous and Loosely coupled
          • Independently deployable
          • Independently scalable
          • Organized around domain or business capabilities (context boundaries)
          • Owned by a small team
          • Owning their related domain data model and domain logic (sovereignty and decentralized data management) and could be based on different data storage technologies (SQL, NoSQL) and different programming languages
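The data-sovereignty tenet above can be illustrated with a minimal in-process sketch (hypothetical service names; a stand-in for independently deployed services that would communicate over HTTP rather than direct calls):

```python
# Sketch of service autonomy: each service owns its own data store and
# exposes only a narrow interface to other services.

class OrderService:
    def __init__(self):
        self._orders = {}          # data owned exclusively by this service

    def place_order(self, order_id, item):
        self._orders[order_id] = item
        return order_id

    def get_order(self, order_id):
        return self._orders.get(order_id)

class BillingService:
    def __init__(self, order_service):
        # Depends only on OrderService's public interface,
        # never on its internal data store.
        self._orders = order_service
        self._invoices = {}        # billing data owned by this service

    def invoice(self, order_id, amount):
        if self._orders.get_order(order_id) is None:
            raise ValueError("unknown order")
        self._invoices[order_id] = amount
        return amount
```

Because `BillingService` never reaches into `OrderService`'s storage, each service can swap its database technology or be redeployed independently, which is exactly the decentralized data management the tenets describe.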

          The following diagram depicts an enterprise application built using Microservices Architecture by leveraging Microsoft technology stack.

          Benefits of Microservices Architecture

          • Easier Development & Deployment – Enables frequent deployment of smaller units. The microservices architecture enables the rapid, frequent, and reliable delivery of large, complex applications
          • Technology adoption/evolution – Enables an organization to evolve its technology stack
          • Process Isolation/Fault tolerance – Each service runs in its own process and communicates with other processes using standard protocols such as HTTP/HTTPS, Web Sockets, AMQP (Advanced Message Queuing Protocol)

          Today, enterprise customers across the globe such as eBay, GE Healthcare, Samsung, BMW, and Boeing have adopted the Microsoft Azure platform for developing their digital solutions. We at Relevance Lab have also delivered numerous digital transformation initiatives to our global customers by leveraging the Azure platform and the Agile Scrum delivery methodology.

          The following diagram depicts an enterprise solution development life cycle leveraging the Azure platform and its various components, which enable the Agile Scrum methodology for end-to-end (E2E) solution delivery.

          Monolithic Architecture does have its strengths, such as development and deployment simplicity, easier debugging and testing, and fewer cross-cutting concerns, and it can be a good choice for certain situations, typically smaller applications. However, for larger, business-critical applications, the monolithic approach brings challenges such as technological barriers, limited scalability, and tight coupling (rigidity), which make changes difficult and leave development teams struggling to understand the code base.

          By adopting a Microservices architecture and Microsoft Azure platform based solutions, businesses can realize the following benefits:

          • Easier, rapid development of enterprise solutions
          • A globally distributed team can be organized so that each group focuses on developing specific services of the system
          • Organized around business capabilities, with rapid infrastructure provisioning & application development – the technology team focuses not just on technologies but also acquires business domain knowledge and cloud infrastructure provisioning/capacity planning skills
          • Offers modularization for large enterprise applications, increases productivity, and helps distributed teams focus on their specific modules, deliver them with speed, and scale them as the business grows

          For more details, please feel free to reach out to


          2022 Blog, Analytics, SPECTRA Blog, Blog, Featured

          If you are a business with a digital product or a subscription model, then you are already familiar with this key metric – “Customer Churn”.

          Customer Churn is the percentage of customers who stopped using your product during a given period. This is a critical metric, as it not only reflects customer satisfaction but also has a big impact on your bottom line. A common rule of thumb is that it costs 6-7 times more to acquire a new customer than to keep an existing one. In addition, existing customers are expected to spend more over time, and satisfied customers lead to additional sales through referrals. Market studies show that increasing customer retention by even a small percentage can boost revenues significantly. Further research reveals that most professionals consider Churn just as important a metric as new customer acquisitions, or more so.
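          As a quick illustration with made-up numbers, the churn rate for a period is simply the share of customers lost during that period:

```python
def churn_rate(customers_at_start, customers_lost):
    """Percentage of customers who stopped using the product in a period."""
    return 100.0 * customers_lost / customers_at_start

# e.g. 1,000 subscribers at the start of the school year, 80 did not renew
print(churn_rate(1000, 80))  # → 8.0
```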

          Subscription businesses strongly believe customers cancel for reasons that could be managed or fixed. “Customer Retention” is the set of strategies and actions that a company follows to keep existing customers from churning. Employing a data-driven customer retention strategy, and leveraging the power of big data and machine learning, offer significant opportunities for businesses to create a competitive advantage versus their peers that don’t.

          Relevance Lab (RL) recently helped a large US based Digital learning company benefit from a detailed churn analysis of its subscription customers, by leveraging the RL SPECTRA platform with machine learning. The portfolio included several digital subscription products used in school educational curriculums which are renewed annually during the start of the school calendar year. Each year, there were several customers that did not renew their licenses and importantly, this happened at the end of the subscription cycle; typically too late for the sales team to respond effectively.

          Here are the steps that the organisation took along the churn management journey.

          • Gather multiple data points to generate better insights
            As with any analysis, to figure out where your churn is coming from, you need to keep track of the right data. Especially with machine learning initiatives, the algorithms depend on large quantities of raw data to learn complex patterns. A sample list of data attributes could include online interactions with the product, clicks, page views, test scores, incident reports, payment information, etc. It could also include unstructured data elements such as reports, reviews and blog posts.

            In this particular example, the data was pulled from four different databases which contained the product platform data for our relevant geography. Data collected included product features, sales and renewal numbers, as well as student product usage, test performance statistics etc, going back to the past 4 years.

            Next, the data was cleansed to remove trial licenses, dummy tests etc, and to normalize missing data. Finally, the data was harmonized to bring all the information into a consolidated format.

            All the above pipelines were established using the SPECTRA ETL process. Now there was a fully functional data setup with cleaned data ordered in tables, to be used in the machine learning algorithms for churn prediction.
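            A minimal sketch of the cleansing step described above, using illustrative records and field names (the actual pipeline ran inside SPECTRA's ETL layer):

```python
# Hypothetical raw records; the real field names and sources differ.
raw = [
    {"customer": "A", "license": "full",  "usage_hours": 120,  "test_score": 78},
    {"customer": "B", "license": "trial", "usage_hours": 3,    "test_score": 55},
    {"customer": "C", "license": "full",  "usage_hours": None, "test_score": 91},
]

def cleanse(records):
    # 1. Drop trial licenses so they don't skew renewal statistics.
    full = [r for r in records if r["license"] == "full"]
    # 2. Normalize missing numeric fields to the mean of known values.
    known = [r["usage_hours"] for r in full if r["usage_hours"] is not None]
    mean_usage = sum(known) / len(known) if known else 0
    for r in full:
        if r["usage_hours"] is None:
            r["usage_hours"] = mean_usage
    return full

cleaned = cleanse(raw)
```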

          • Predictive analytics use Machine Learning to know who is at risk
            Once you have the data, you are now ready to work on the core of your analysis, to understand where the risk of churn is coming from, and hence identify the opportunities for strengthening your customer relationships. Machine learning techniques are especially suited to this task, as they can churn through massive amounts of historical data to learn about customer behavior, and then use this training to make predictions about important outcomes such as retention.

            On our assignment, the RL team tried out a number of machine learning models built-in within SPECTRA to predict the churn and zeroed in on a random forest model. This method is very effective when using inconsistent data sets, where the system can handle differences in behavior very effectively by creating a large number of random trees. In the end, the system provided a predicted rating for each customer to drop out of the system and highlighted the ones most at risk.
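            The production model used the random forest implementation built into SPECTRA. As a rough illustration of the underlying idea (many randomized weak learners vote, and the fraction voting "churn" becomes the customer's risk rating), here is a toy version using one-feature decision stumps and made-up data:

```python
import random

def train_forest(rows, labels, n_trees=25, seed=42):
    """Train a toy 'forest' of one-feature decision stumps.
    Each stump picks a random feature and a random threshold from the data,
    then orients itself to best match the training labels (1 = churned)."""
    rng = random.Random(seed)
    n_features = len(rows[0])
    stumps = []
    for _ in range(n_trees):
        f = rng.randrange(n_features)
        t = rng.choice([r[f] for r in rows])
        # Flip the stump if "value <= t means churn" disagrees with
        # the majority of the training labels.
        agree = sum((r[f] <= t) == bool(y) for r, y in zip(rows, labels))
        flip = agree < len(rows) / 2
        stumps.append((f, t, flip))
    return stumps

def churn_risk(stumps, row):
    """Fraction of stumps voting 'churn' — the customer's risk rating."""
    votes = 0
    for f, t, flip in stumps:
        vote = row[f] <= t
        votes += (not vote) if flip else vote
    return votes / len(stumps)

# Hypothetical training data: [usage_hours, avg_test_score], label 1 = churned
rows = [[5, 40], [8, 35], [120, 85], [90, 80], [10, 50], [150, 90]]
labels = [1, 1, 0, 0, 1, 0]
forest = train_forest(rows, labels)
```

            A real random forest grows full decision trees on bootstrapped samples, but the voting mechanism shown here is the same reason the method tolerates inconsistent data so well.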

          • Define the most valuable customers
            Parallel to identifying customers at risk of churn, data can also be used to segment customers into different groups to identify how each group interacts with your product. In addition, data regarding frequency of purchase, purchase value, product coverage helps you to quickly identify which type of customers are driving the most revenue, versus customers which are a poor fit for your product. This will then allow you to adopt different communication and servicing strategies for each group, and to retain your most valuable customers.

            By combining our machine learning model output with the segmentation exercise, the result was a dynamic dashboard, which could be sorted/filtered by different criteria such as customer size and geographical location. This provided the opportunity to highlight the customers which were at the highest risk, from the joint viewpoint of attrition and revenue loss. This in turn enabled the client to effectively utilize sales team resources in the best possible manner.
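            A sketch of how risk and revenue can be combined to prioritize outreach, with hypothetical customers (the actual dashboard was built on SPECTRA output and supported richer filtering):

```python
# Hypothetical customers combining predicted churn risk with revenue.
customers = [
    {"name": "District A", "annual_revenue": 50000,  "churn_risk": 0.82, "region": "West"},
    {"name": "District B", "annual_revenue": 120000, "churn_risk": 0.35, "region": "East"},
    {"name": "District C", "annual_revenue": 80000,  "churn_risk": 0.91, "region": "West"},
]

def prioritize(customers, region=None):
    """Rank customers by expected revenue at risk (risk x revenue),
    optionally filtered by region, highest exposure first."""
    pool = [c for c in customers if region is None or c["region"] == region]
    return sorted(pool,
                  key=lambda c: c["churn_risk"] * c["annual_revenue"],
                  reverse=True)

top = prioritize(customers)
```

            Ranking by expected revenue at risk rather than risk alone is what lets a small sales team spend its time where retention effort pays off most.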

          • Engage with the customers
            Now that you have identified the top customers whom you are at risk of losing, the next step is to actively engage with them and give them an incentive to stay, by helping each customer achieve real value out of your product.

            The nature of engagement could depend on the stage the customer is in the relationship. Is the customer in the early stage of product adoption? This could point to the fact that the customer is unable to get set up with your product. Here, you have to make sure that the customer has access to enough training material, or perhaps the customer requires additional onboarding support.

            If the customer is in the middle stage, it could be that the customer is not realizing enough business value out of your product. Here, you need to check in with your customer, to see whether they are making enough progress towards their goals. If the customer is in late stage, it is possible that they are looking at competitor offerings, or they were frustrated with bugs, and hence the discussion would need to be shaped accordingly.

            To tailor the nature of your conversation, you need to take a close look at the customer product interaction metrics. In our example, all the customer usage patterns, test performance, books read, word literacy, etc, were collected and presented in a dashboard, giving the sales and marketing teams a single point of reference to review customer engagement levels and connect constructively with the customer's management.

          If you are looking at reducing your customer churn and improving customer retention, it all comes down to predicting customers at risk of churn, analyzing the reasons behind churn, and then taking appropriate action. Machine learning based models are of particular help here, as they can take into account hundreds and even thousands of different factors, which may not be obvious or even possible to track for a human analyst. In this example, the SPECTRA platform helped the client sales team to predict the customers’ inclination for renewal of the specific learning product with 92% accuracy.

          Additional references
          Research from Bain and Co. shows that increasing customer retention by even 5% boosts revenues by 25% – 95%
          Report from Brightback reveals that Churn is just as or more important a metric than new customer acquisitions

          To learn how you can leverage machine learning and AI within your customer retention strategy, please reach out to

