
AWS Marketplace is a high-potential channel for delivering software and professional services. The main benefit for customers is that they get a single bill from AWS for all their infrastructure and software consumption. Also, since AWS is already on the approved vendor list for many enterprises, it becomes easier for them to consume software from the same vendor.

Relevance Lab has always considered AWS Marketplace as one of the important channels for the distribution of its software products. In 2020 we had listed our RLCatalyst 4.3.2 BOTs Server product on the AWS Marketplace as an AMI-based product that a customer could download and run in their AWS account. This year, RLCatalyst Research Gateway was listed on the AWS Marketplace as a Software as a Service (SaaS) product.

This blog details some of the steps that a customer needs to go through to consume this product from the AWS Marketplace.


Step-1: The first step for a customer looking to find the product is to log in to their account and visit the AWS Marketplace, then search for RLCatalyst Research Gateway. This will show the Research Gateway product at the top of the results. Clicking on the link leads to the product details page.

The product details page lists important details such as:

  • Pricing information
  • Support information
  • Set up instructions

Step-2: The second step for the user is to subscribe to the product by clicking on the “Continue to Subscribe” button. This step requires the user to log in to their AWS account (if not done earlier). The page that comes up shows the contract options the user can choose. RLCatalyst Research Gateway (SaaS) offers three subscription tiers.

  • Small tier (1-10 users)
  • Medium tier (11-25 users)
  • Large tier (unlimited users)

Also, the customer has the option of choosing a monthly contract or an annual contract. The monthly contract is good for customers who want to try the product or for those customers who would like a budget outflow that is spread over the year rather than a lump sum. The annual contract is good for customers who are already committed to using the product in the long term. An annual contract gets the customer an additional discount over the monthly price.

The customer also has to choose whether they want the contract to renew automatically or not.

One of the great features of AWS Marketplace is that the customer can modify the contract at any time and upgrade to a higher plan (e.g. Small tier to Medium or Large tier). The customer can also modify the contract to opt for auto-renewal at any time.

Step-3: The third step for the user is to click on the “Subscribe” button after choosing their contract options. This leads the user to the registration page where they can set up their RLCatalyst Research Gateway account.



This screen is meant for the Administrator persona to enter the details for the organization. Once the user enters the details, agrees to the End User License Agreement (EULA), and clicks on the Sign-up button, the process for provisioning the account is set in motion. The user should get an acknowledgment email within 12 hours and a verification email within 24 hours.

Step-4: The user should verify their email account by clicking on the verification link in the email they receive from RLCatalyst Research Gateway.

Step-5: Finally, the user will get a “Welcome” email with the details of their account, including the custom URL for logging in to their RLCatalyst Research Gateway account. The user is now ready to log in to the portal. On logging in, the user will see a Welcome screen.


Step-6: The user can now set up their first Organizational Unit in the RLCatalyst Research Gateway portal by following these steps.

6.1 Navigate to settings from the menu at the top right.


6.2 Click on the “Add New” button to add an AWS account.


6.3 Enter the details of the AWS account.


Note that the account name given on this screen can be any name that helps the Administrator remember which OU and project the account is meant for.

6.4 The Administrator can repeat the procedure to add more than one project (consumption) account.

Step-7: Next, the Administrator needs to add Principal Investigator users to the account. For this, they should contact the support team either by email (rlc.support@relevancelab.com) or by visiting the support portal (https://serviceone.relevancelab.com).

Step-8: The final step to set up an OU is to click on the “Add New” button on the Organizations page.


8.1 The Administrator should give a friendly name to the Organization in the “Organization Name” field. Then they should choose all the accounts that will be used by projects in this Organizational Unit. A friendly description should be entered in the “Organization Description” field. Finally, choose a Principal Investigator who will manage/own this Organizational Unit. Click “Add Organization” to add this OU.


Summary
As you can see above, ordering RLCatalyst Research Gateway (SaaS) from the AWS Marketplace makes it extremely easy to get started, and end-users can start using the product in no time. Given the SaaS model, the customer does not need to worry about setting up the software in their account. At the same time, using their own AWS account for the projects gives them complete transparency into budget consumption.

In our next blog, we will provide step-by-step details of adding organizational units, projects & users to complete the next part of the setup.

To know more, feel free to contact marketing@relevancelab.com.




Working on non-scientific tasks such as setting up instances, installing software libraries, making models compile, and preparing input data is one of the biggest pain points for atmospheric scientists, or any scientists for that matter. These tasks are challenging because they demand strong technical skills, deviating scientists from their core areas of analysis & research data compilation. Further adding to this, some of these tasks require high-performance computation, complicated software, and large data. Lastly, researchers need a real-time view of their actual spending, as research projects are often budget-bound. Relevance Lab helps researchers “focus on science and not servers” in partnership with AWS, leveraging the RLCatalyst Research Gateway (RG) product.

Why RLCatalyst Research Gateway?
Speeding up scientific research using the AWS cloud is a growing trend towards achieving “Research as a Service”. However, the adoption of AWS Cloud can be challenging for Researchers, with surprises on costs, security, governance, and right architectures. Similarly, Principal Investigators can have a challenging time managing the research program with collaboration, tracking, and control. Research Institutions would like to provide consistent and secure environments, standard approved products, and proper governance controls. The product was created to solve these common needs of Researchers, Principal Investigators, and Research Institutions.


  • Available on AWS Marketplace and can be consumed in both SaaS as well as Enterprise mode
  • Provides a Self-Service Cloud Portal with the ability to manage the provisioning lifecycle of common research assets
  • Gives real-time visibility of the spend against the defined project budgets
  • Allows the Principal Investigator to pause or stop the project if the budget is exceeded, until a new grant is approved

In this blog, we explain how the product has been used to solve a common research problem of GEOS-Chem used for Earth Sciences. It covers a simple process that starts with access to large data sets on public S3 buckets, creation of an on-demand compute instance with the application loaded, copying the latest data for analysis, running the analysis, storing the output data, analyzing the same using specialized AI/ML tools and then deleting the instances. This is a common scenario faced by researchers daily, and the product demonstrates a simple Self-Service frictionless capability to achieve this with tight controls on cost and compliance.

GEOS-Chem enables simulations of atmospheric composition on local to global scales. It can be used offline as a 3-D chemical transport model driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office (GMAO). The figure below shows the basic construct of GEOS-Chem input and output analysis.



Since this is a common use case, documentation on how to run GEOS-Chem on AWS Cloud is available in the public domain from researchers. The product makes the process simpler using a Self-Service Cloud portal. To know more about similar use cases and advanced computing options, refer to AWS HPC for Scientific Research.



Steps for GEOS-Chem Research Workflow on AWS Cloud
Prerequisites for the researcher before starting data analysis:

  • A valid AWS account and access to the RG portal
  • A publicly accessible S3 bucket with large research data sets
  • An additional EBS volume for ongoing operational research work (for occasional usage, it is recommended to store a snapshot in S3 for better cost management)
  • A pre-provisioned SageMaker Jupyter notebook to analyze output data

Once done, below are the steps to execute this use case; a sketch of the key S3 calls follows the list.

  • Login to the RG Portal and select the GEOS-Chem project
  • Launch an EC2 instance with GEOS-Chem AMI
  • Login to EC2 using SSH and configure AWS CLI
  • Connect to a public S3 bucket from AWS CLI to list NASA-NEX data
  • Run the simulation and copy the output data to a local S3 bucket
  • Link the local S3 bucket to AWS SageMaker instance and launch a Jupyter notebook for analysis of the output data
  • Once done, terminate the EC2 instance and check for the cost spent on the use case
  • All costs related to GEOS-Chem project and researcher consumption are tracked automatically
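
For illustration, here is a minimal sketch of the two S3 steps using boto3. The NASA-NEX public bucket name is real, but the output file, result bucket, and key names are hypothetical placeholders, not the exact ones used by the portal:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # List objects in the public NASA-NEX bucket without credentials
    # (public data sets allow unsigned requests).
    s3_public = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3_public.list_objects_v2(Bucket="nasanex", MaxKeys=10)
    for obj in resp.get("Contents", []):
        print(obj["Key"])

    # Copy simulation output from the EC2 instance to a project-owned bucket
    # (bucket and key names below are hypothetical).
    s3 = boto3.client("s3")
    s3.upload_file("OutputDir/GEOSChem.SpeciesConc.nc4",
                   "my-project-results",
                   "geos-chem/run-001/GEOSChem.SpeciesConc.nc4")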

Sample Output Analysis
Once you run the output files through the Jupyter notebook, it compiles them and presents the output data in a visual format, as shown in the sample below. The researcher can then create a snapshot, upload it to S3, and terminate the EC2 instance (without deleting the additional EBS volume created along with EC2).

Output to analyze the loss rate and air mass of hydroxyl (OH) pertaining to Atmospheric Science.
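
As a sketch, GEOS-Chem NetCDF outputs like these are typically explored with xarray in the notebook. The file name and variable name below are assumptions for illustration; actual names vary with the simulation configuration:

    import xarray as xr

    # Open a GEOS-Chem output file (NetCDF) copied from the run's S3 bucket.
    # File name is hypothetical.
    ds = xr.open_dataset("GEOSChem.SpeciesConc.20160701_0000z.nc4")

    # Plot the surface-level OH concentration for the first time step
    # (renders inline in a Jupyter notebook via matplotlib).
    ds["SpeciesConc_OH"].isel(time=0, lev=0).plot()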


Summary
Scientific computing can take advantage of cloud computing to speed up research, scale-up computing needs almost instantaneously, and do all this with much better cost-efficiency. Researchers no longer need to worry about the expertise required to set up the infrastructure in AWS as they can leave this to tools like RLCatalyst Research Gateway, thus compressing the time it takes to complete their research computing tasks.

The steps demonstrated in this blog can be easily replicated for other similar research domains. They can also be used to onboard new researchers with pre-built solution stacks provided in an easy-to-consume option. RLCatalyst Research Gateway is available in SaaS mode from the AWS Marketplace, and research institutions can continue to use their existing AWS accounts to configure and enable the solution for more effective scientific research governance.

To learn more about GEOS-Chem use cases, click here.

If you want to learn more about the product or book a live demo, feel free to contact marketing@relevancelab.com.

References
Enabling Immediate Access to Earth Science Models through Cloud Computing: Application to the GEOS-Chem Model
Enabling High‐Performance Cloud Computing for Earth Science Modeling on Over a Thousand Cores: Application to the GEOS‐Chem Atmospheric Chemistry Model




AWS provides a comprehensive, elastic, and scalable cloud infrastructure to run your HPC applications. Working with AWS in exploring HPC for driving scientific research, Relevance Lab leveraged its RLCatalyst Research Gateway product to provision an HPC cluster using AWS Service Catalog, with simple steps to launch a new environment for research. This blog captures the steps used to launch a simple HPC 1.0 cluster on AWS and a roadmap to extend the functionality to cover more advanced use cases of HPC Parallel Cluster.

AWS delivers an integrated suite of services that provides everything needed to build and manage HPC clusters in the cloud. These clusters are deployed across various industry verticals to run the most compute-intensive workloads. AWS supports a wide range of HPC applications, spanning traditional applications such as genomics, computational chemistry, financial risk modeling, computer-aided engineering, weather prediction, and seismic imaging, as well as newer applications such as machine learning, deep learning, and autonomous driving. In the US alone, multiple organizations across different specializations are choosing the cloud to collaborate on scientific research.


Similar programs exist across geographies and institutions in the EU, Asia, and country-specific Public Sector programs. Our focus is to work with AWS and regional scientific institutions in bringing the power of supercomputers to day-to-day researchers in a cost-effective manner with proper governance and tracking. Also, with Self-Service models, the focus needs to shift from worrying about computation to data, workflows, and analytics; this requires a new paradigm of serverless scientific computing, which we cover in a later section.

Relevance Lab RLCatalyst Research Gateway provides a Self-Service Cloud portal to provision AWS products with a 1-click model based on AWS Service Catalog. While dealing with more complex AWS products like HPC, there is a need for a multi-step provisioning model and post-provisioning actions that are not always possible using standard AWS APIs. In situations requiring complex orchestration and post-provisioning automation, RLCatalyst BOTs provide a flexible and scalable solution that complements the base Research Gateway features.

Building blocks of HPC on AWS
AWS offers various services that make it easy to set up an HPC environment.


An HPC solution in AWS uses the following components as building blocks.

  • EC2 instances are used for Master and Worker nodes. The master nodes can use On-Demand instances and the worker nodes can use a combination of On-Demand and Spot Instances.
  • The cluster manager software is built into an AMI that is used for the creation of master nodes.
  • The agent software that lets the manager communicate with the worker nodes is built into a second AMI that is then used for provisioning the worker nodes.
  • Data is shared between different nodes using a file-sharing mechanism like FSx Lustre.
  • Long-term storage uses AWS S3.
  • Scaling of nodes is done via Auto-scaling.
  • KMS for encrypting and decrypting the keys.
  • Directory services to create the domain for accessing HPC via a UI.
  • Lambda functions to create the user directory.
  • Elastic Load Balancing is used to distribute incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances.
  • Amazon EFS is used for regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs.
  • AWS VPC to launch the EC2 instances in private cloud.

Evolution of HPC on AWS
  • HPC clusters first came into existence in AWS using the CfnCluster CloudFormation template, which creates a number of manager and worker nodes in the cluster based on the input parameters. This product can be made available through AWS Service Catalog and is an item that can be provisioned from the RLCatalyst Research Gateway. The cluster manager software like Slurm, Torque, or SGE is pre-installed on the manager nodes, and the agent software is pre-installed on the worker nodes. Also pre-installed is software that can provide a UI (like NICE EnginFrame) for the user to submit jobs to the cluster manager.
  • AWS Parallel Cluster is a newer offering from AWS for provisioning an HPC cluster. This service provides an open-source, CLI-based option for setting up a cluster. It sets up the manager and worker nodes and also installs controlling software that can watch the job queues and trigger scaling requests on the AWS side so that the overall cluster can grow or shrink based on the size of the queue of jobs.

Steps to Launch HPC from RLCatalyst Research Gateway
A standard HPC launch involves the following steps; a sketch of the underlying provisioning call follows the list.

  • Provide the input parameters for the cluster. This will include
    • The compute instance size for the master node (vCPUs, RAM, Disk)
    • The compute instance size for the worker nodes (vCPUs, RAM, Disk)
    • The minimum and maximum number of worker nodes.
    • Select the workload manager software (Slurm, Torque, SGE)
    • Connectivity options (SSH keys etc.)
  • Launch the product.
  • Once the product is in Active state, connect to the URL in the Output parameters on the Product Details page. This connects you to the UI from where you can submit jobs to the cluster.
  • You can SSH into the master nodes using the key pair selected in the Input form.
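
Under the hood, a 1-click launch based on AWS Service Catalog boils down to a provisioning call such as the boto3 sketch below. The product and artifact IDs and the parameter keys are hypothetical placeholders, not the actual ones used by Research Gateway:

    import boto3

    sc = boto3.client("servicecatalog")

    # Provision the HPC cluster product from the Service Catalog
    # (IDs and parameter keys below are hypothetical).
    response = sc.provision_product(
        ProductId="prod-xxxxxxxxxxxxx",
        ProvisioningArtifactId="pa-xxxxxxxxxxxxx",
        ProvisionedProductName="hpc-cluster-demo",
        ProvisioningParameters=[
            {"Key": "MasterInstanceType", "Value": "c5.xlarge"},
            {"Key": "ComputeInstanceType", "Value": "c5.xlarge"},
            {"Key": "MinClusterSize", "Value": "2"},
            {"Key": "MaxClusterSize", "Value": "10"},
            {"Key": "Scheduler", "Value": "slurm"},
            {"Key": "KeyName", "Value": "my-ssh-keypair"},
        ],
    )
    print(response["RecordDetail"]["Status"])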

RLCatalyst Research Gateway uses the CfnCluster method to create an HPC cluster. This allows the HPC cluster to be created just like any other product in our Research Gateway catalog. Though provisioning may take up to 45 minutes to complete, it creates a URL in the outputs through which jobs can be submitted.

Advanced Use Cases for HPC

  • Computational Fluid Dynamics
  • Risk Management & Portfolio Optimization
  • Autonomous Vehicles – Driving Simulation
  • Research and Technical Computing on AWS
  • Cromwell on AWS
  • Genomics on AWS

We have specifically looked at the use case that pertains to BioInformatics where a lot of the research uses Cromwell server to process workflows defined using the WDL language. The Cromwell server acts as a manager that controls the worker nodes, which execute the tasks in the workflow. A typical Cromwell setup in AWS can use AWS Batch as the backend to scale the cluster up and down and execute containerized tasks on EC2 instances (on-demand or spot).



Prospect of Serverless Scientific Computing and HPC
With the advent of serverless computing and its availability on all major computing platforms, the “Function as a Service” paradigm can now be applied to HPC and workflows for scientific research: computing that would be done on a High Performance Cluster can instead be run as Lambda functions. The obvious advantage of this model is that the virtual cluster is highly elastic and is charged only for the exact execution time of each Lambda function executed.

One current limitation of this model is that only a few runtimes are supported, like Node.js and Python, while a lot of scientific computing code is written in languages like C, C++, and Java. However, this is fast changing, and cloud providers are introducing new runtimes like Go and Rust.
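
As a minimal sketch of the idea, each unit of scientific work can be wrapped in a Lambda handler and fanned out across many concurrent invocations. The payload fields and the toy computation below are made up for illustration:

    import json

    def handler(event, context):
        """Compute one chunk of a larger simulation.

        Many such invocations can run in parallel, forming a highly
        elastic "virtual cluster" billed per invocation duration.
        """
        start = event["start"]   # hypothetical chunk boundaries
        end = event["end"]
        # Stand-in for real scientific computation on this chunk.
        result = sum(i * i for i in range(start, end))
        return {"statusCode": 200, "body": json.dumps({"partial": result})}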


Summary
Scientific computing can take advantage of cloud computing to speed up research, scale up computing needs almost instantaneously, and do all this with much better cost efficiency. Researchers no longer need to worry about the expertise required to set up the infrastructure in AWS, as they can leave this to tools like RLCatalyst Research Gateway, thus compressing the time it takes to complete their research computing tasks.

To learn more about this solution, or to start using it for your internal needs, feel free to contact marketing@relevancelab.com

References
Getting started with HPC on AWS
HPC on AWS Whitepaper
AWS HPC Workshops
Genomics in the Cloud
Serverless Supercomputing: High Performance Function as a Service for Science
FaaSter, Better, Cheaper: The Prospect of Serverless Scientific Computing and HPC




Compliance on the cloud is an important aspect in today’s world of remote working. As enterprises accelerate the adoption of cloud to drive frictionless business, there can be surprises on security, governance, and cost without a proper framework. Relevance Lab (RL) helps enterprises speed up workload migration to the cloud with the assurance of security, governance, and cost management using an integrated solution built on standard AWS products and an open-source framework. The key building blocks of this solution are shown below.


Why do enterprises need Compliance as a Code?
For most enterprises, the major challenge is around governance and compliance and the lack of visibility into their cloud infrastructure. They spend enormous amounts of time, thousands of person-hours, trying to achieve compliance in a siloed manner. This can be addressed by automating compliance monitoring and increasing visibility across the cloud with the right set of tools and frameworks. Relevance Lab’s Compliance as a Code framework addresses the enterprise need for automation of security & compliance. Through a combination of preventive, detective, and responsive controls, we help enterprises enforce nearly continuous compliance and auto-remediation, thereby increasing overall security and reducing compliance cost.

Key tools and framework of Cloud Governance 360°
AWS Control Tower: AWS Control Tower (CT) helps organizations set up, manage, monitor, and govern a secured multi-account environment using AWS best practices. Setting up Control Tower on a new account is relatively simple compared to setting it up on an existing account. Once Control Tower is set up, the landing zone has the following.


  • 2 Organizational Units
  • 3 accounts, a master account and isolated accounts for log archive and security audit
  • 20 preventive guardrails to enforce policies
  • 2 detective guardrails to detect config violations

Apart from this, you can customize the guardrails and implement them using AWS Config Rules. For more details on Control Tower implementation, refer to our earlier blog here.

Cloud Custodian: Cloud Custodian is a tool that unifies the dozens of tools and scripts most organizations use for managing their public cloud accounts into one open-source tool. It uses a stateless rules engine for policy definition and enforcement, with metrics, structured outputs, and detailed reporting for cloud infrastructure. It integrates tightly with serverless runtimes to provide real-time remediation/response with low operational overhead.

Organizations can use Custodian to manage their cloud environments by ensuring compliance with security policies, tag policies, garbage collection of unused resources, and cost management from a single tool. Custodian adheres to a Compliance as Code principle, helping you validate, dry-run, and review changes to your policies. The policies are expressed in YAML and include the following; a minimal example follows the list.

  • The type of resource to run the policy against
  • Filters to narrow down the set of resources
  • Actions to take on the filtered set of resources
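
A minimal illustrative policy in Cloud Custodian's native YAML format might look like the sketch below. The policy name and tag key are hypothetical:

    policies:
      - name: stop-untagged-ec2        # hypothetical policy name
        resource: ec2                  # the type of resource to run against
        filters:                       # narrow down the set of resources
          - "tag:Owner": absent
        actions:                       # act on the filtered resources
          - stop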

In short, Cloud Custodian is a rules engine that lets users define policies to keep cloud infrastructure well managed, secure, and cost-optimized, consolidating the many ad hoc scripts organizations accumulate into a lightweight, flexible tool with unified metrics and reporting.



Security Hub: AWS Security Hub gives you a comprehensive view of your security alerts and security posture across your AWS accounts. It’s a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions like Cloud Custodian. You can also take action on these security findings by investigating them in Amazon Detective or by using Amazon CloudWatch Event rules to send the findings to an ITSM, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or to custom remediation playbooks.
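
For example, here is a short boto3 sketch that pulls active, failed-compliance findings, which could then be routed to an ITSM ticket or a remediation playbook. The filter shown is just one possibility:

    import boto3

    securityhub = boto3.client("securityhub")

    # Fetch active findings whose compliance status is FAILED.
    resp = securityhub.get_findings(
        Filters={
            "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        MaxResults=10,
    )
    for finding in resp["Findings"]:
        print(finding["Title"], finding["Severity"]["Label"])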




Below is a snapshot of features across AWS Control Tower, Cloud Custodian, and Security Hub. As shown in the table, these solutions complement each other across common compliance needs.


| # | AWS Control Tower | Cloud Custodian | Security Hub |
|---|---|---|---|
| 1 | Easy to implement and configure within a few clicks | Lightweight and flexible open-source framework that helps deploy cloud policies | Gives a comprehensive view of security alerts and security posture across AWS accounts |
| 2 | Helps achieve “Governance at Scale” – account management, security, compliance automation, budget and cost management | Helps achieve real-time compliance and cost management | A single place that aggregates, organizes, and prioritizes security alerts (findings) from multiple AWS services |
| 3 | Predefined guardrails based on best practices – establish/enable guardrails | You define the rules and Cloud Custodian enforces them | Continuously monitors the account using automated security checks based on AWS best practices |
| 4 | Guardrails are enabled at the Organization level | If an account needs to include or exclude certain policies, those exemptions can be handled | With a few clicks in the Security Hub console, you can connect multiple AWS accounts and consolidate findings across them |
| 5 | Automates compliant account provisioning | Can be included in the account-creation workflow to deploy the set of policies to every AWS account as part of bootstrapping | Automates continuous, account- and resource-level configuration and security checks using industry standards and best practices |
| 6 | Separate account for centralized logging of all activities across accounts | Offers comprehensive logs whenever a policy is executed; logs can be stored in an S3 bucket | Create and customize your own insights, tailored to your specific security and compliance needs |
| 7 | Separate account for audit, designed to give security and compliance teams read and write access to all accounts | Can be integrated with AWS Config, AWS Security Hub, AWS Systems Manager, and AWS X-Ray | Supports a diverse ecosystem of partner integrations |
| 8 | Single-pane dashboard for visibility into all OUs, accounts, and guardrails | Needs integration with Security Hub to view all policies implemented across regions and accounts | Monitor your security posture and quickly identify security issues and trends across AWS accounts in Security Hub’s summary dashboard |


Relevance Lab Compliance as a Code Framework
Relevance Lab’s Compliance as a Code framework is an integrated model spanning AWS Control Tower (CT), Cloud Custodian, and AWS Security Hub. As shown below, CT helps organizations with pre-defined multi-account governance based on AWS best practices. Account provisioning is standardized across the hundreds or thousands of accounts within the organization. By enabling Config rules, you can bring in additional compliance checks to manage security, cost, and account management. To implement event- and action-based policies, Cloud Custodian is deployed as a complementary solution to CT that monitors, notifies, and takes remediation actions based on events. As these policies run in AWS Lambda, Cloud Custodian enforces Compliance as Code and auto-remediation, enabling organizations to accelerate towards security and compliance. Real-time visibility into who made what changes from where enables us to detect human errors and non-compliance and take suitable remediation actions, which improves operational efficiency and brings in cost optimization.

For example, Custodian can identify all untagged EC2 instances, or EBS volumes that are not attached to an EC2 instance, and notify the account admin that they will be terminated in the next 48 to 72 hours if no action is taken. A custom insights dashboard on Security Hub helps admins monitor non-compliance and integrate with an ITSM to create tickets and assign them to resolver groups. RL has implemented Compliance as a Code for its own SaaS production platform, RLCatalyst Research Gateway, a custom cloud portal for researchers.



Common Use Cases


How to get started
Relevance Lab is an AWS consulting partner and helps organizations achieve Compliance as a Code using AWS best practices. While enterprises can try to build some of these solutions themselves, it is a time-consuming, error-prone activity that needs a specialist partner. RL has helped 10+ enterprises with this need and has a reusable framework to meet security and compliance requirements. To start, customers can enroll in a 10-10 program, which gives insight into their current cloud compliance. Based on an assessment, Relevance Lab shares a gap-analysis report and helps design the appropriate “to-be” model. Our cloud governance professional services group also provides implementation and support services with agility and cost-effectiveness.

For more details, please feel free to reach out to marketing@relevancelab.com




The year 2020 undoubtedly brought unprecedented challenges with COVID-19 that required countries, governments, businesses, and people to respond proactively with new-normal approaches. Certain businesses managed to respond to the dynamic macro environment with a lot more agility, flexibility, and scale. Cloud computing had a dominant effect in helping businesses stay connected and deliver critical solutions. Relevance Lab scaled up its partnership with AWS to align with the new focus areas, resulting in a significant increase in coverage of products, specialized solutions, and customer impact over the past 12 months compared to what had been achieved before.

AWS launched a number of new global initiatives in 2020 in response to the business challenges resulting from the COVID-19 pandemic. The following picture describes those initiatives in a nutshell.


Relevance Lab’s Focus areas have been very well aligned
Given the macro-environment challenges, Relevance Lab quickly aligned its focus on helping customers deal with the emerging challenges, and those areas were very complementary to AWS’ global initiatives and responses listed above.

Relevance Lab aligned its AWS solutions & services across four dominant themes, leveraging RL’s own IP-based platforms and deep cloud competencies. The following picture highlights those key initiatives and the native AWS products and services leveraged in the process.


Relevance Lab also made major strides in progressing along AWS Partnership Maturity Levels for specialized consulting and key product offerings. The following picture highlights the major achievements in 2020 in a nutshell.



AWS Specializations & Spotlights
Relevance Lab is a specialist cloud managed services company with deep expertise in DevOps, service automation, and ITSM integrations. It also has the RLCatalyst platform, built to support automation across AWS Cloud and ITSM platforms. The RLCatalyst family of solutions helps with self-service cloud portals, IT service monitoring, and automation through BOTs. While maintaining a multi-sector client base, we are also uniquely focused, with existing solutions for Higher Education, Public Sector, and Research sector clients.


Spotlight-1: AWS Cloud Portals Solution
Relevance Lab has developed a family of cloud portal solutions on top of our RLCatalyst platform. Cloud portal solutions aim to simplify AWS consumption using self-service models, with emphasis on 1-click provisioning / lifecycle management, dashboard views of budget / cost consumption, and modeling personas, roles & responsibilities within a sector context.


A unique feature of the above solutions is that they promote a hybrid model of consumption wherein users can bring their own AWS accounts (consumption accounts) under the framework of our cloud portal solution and benefit from being able to consume AWS for their educational and research needs in an easy self-service model.

The solutions can be consumed as either an Enterprise or as a SaaS license. In addition, the solutions will be made available on AWS Marketplace soon.


Spotlight-2: AWS Security Governance at Scale Framework
The framework and deployment architecture uses AWS Control Tower as the foundational service and other closely aligned and native AWS products and services such as AWS Service Catalog, AWS Security Hub, AWS Budgets, etc. and addresses subject areas such as multi-account management, cost management, security, compliance & governance.

Relevance Lab can assess, design, and deploy or migrate to a fully secure AWS environment that lends itself to governance at scale. To encourage clients to adopt this journey, we have launched a 10-10 Program for AWS Security Governance that provides clients with an upfront blueprint of the entire migration or deployment process and the end-state architecture so that they can make an informed decision.


Spotlight-3: Automated User Onboarding/Offboarding for Enterprises Use Case
Relevance Lab is a unique partner that possesses deep expertise on AWS and ITSM platforms such as ServiceNow, Freshservice, JIRA Service Desk, etc. The intersection of these platforms lends itself to relevant use cases for the industry. Relevance Lab has come up with a solution for automated user onboarding & offboarding in an enterprise context. This solution brings together multiple systems in a workflow model to accomplish user onboarding and offboarding tasks for an enterprise. It integrates HR systems, AWS Service Catalog and other services, and ITSM platforms such as ServiceNow, assisted by Relevance Lab’s RLCatalyst BOTs engine, to perform end-to-end user onboarding/offboarding orchestration in an unassisted manner.


Key Customer Use Cases and Success Stories in 2020
Relevance Lab helped customers across verticals ride the growing momentum of cloud adoption in the post-COVID-19 situation, rewriting their digital solutions to adopt touchless interactions, remote distributed workforces, and strong security governance for enabling frictionless business.


AWS Best Practices – Blogs, Campaigns, Technical Write-ups
The following is a collection of knowledge articles and best practices published throughout the year related to our AWS centered solutions & services.

AWS Service Management


AWS Practice Solution Campaigns

AWS Governance at Scale

AWS Workspaces

RLCatalyst Product Details

AWS Infrastructure Automation

AWS E-Learning & Research Workbench Solutions

Cloud Networking Best Practices


Summary
The momentum of cloud adoption in 2020 is quite likely to continue and grow in the new year 2021. Relevance Lab is a trusted partner in your cloud adoption journey, driven by a focus on the following key specializations:

  • Cloud First Approach for Workload Planning
  • Cloud Governance 360 for using AWS Cloud the Right-Way
  • Automation Led Service Delivery Management with Self Service ITSM and Cloud Portals
  • Driving Frictionless Business and AI-Driven outcomes for Digital Transformation and App Modernization

To learn more about our services and solutions listed above or engage us in consultative discussions for your AWS and other IT service needs, feel free to contact us at marketing@relevancelab.com




AWS X-Ray is an application performance service that collects data about requests that your application processes, and provides tools to view, filter, and gain insights into that data to identify issues and opportunities for optimization. It enables a developer to create a service map that displays an application’s architecture. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs. It is compatible with microservices and serverless based applications.

The X-Ray SDK provides

  • Interceptors to add to your code to trace incoming HTTP requests
  • Client handlers to instrument AWS SDK clients that your application uses to call other AWS services
  • An HTTP client to use to instrument calls to other internal and external HTTP web services

The SDK also supports instrumenting calls to SQL databases, automatic AWS SDK client instrumentation, and other features.
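
As a sketch using the Python variant of the SDK (package aws-xray-sdk; the function and subsegment names below are hypothetical):

    from aws_xray_sdk.core import xray_recorder, patch_all

    # Automatically instrument supported libraries (boto3, requests, SQL
    # drivers, etc.) so downstream calls appear as subsegments.
    patch_all()

    @xray_recorder.capture("process_order")   # hypothetical subsegment name
    def process_order(order_id):
        # Work done here is traced as the "process_order" subsegment.
        # In Lambda, or behind the SDK's web-framework middleware, the
        # parent segment is created automatically.
        ...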

Instead of sending trace data directly to X-Ray, the SDK sends JSON segment documents to a daemon process listening for UDP traffic. The X-Ray daemon buffers segments in a queue and uploads them to X-Ray in batches. The daemon is available for Linux, Windows, and macOS, and is included on AWS Elastic Beanstalk and AWS Lambda platforms.
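
To make that concrete, the daemon protocol is simple enough to sketch: a JSON header line, a newline, then the segment document, sent over UDP to the daemon's default port 2000. The service name and IDs below are fabricated:

    import json
    import os
    import socket
    import time

    segment = {
        "name": "my-service",            # fabricated service name
        "id": os.urandom(8).hex(),       # 16-hex-digit segment id
        "trace_id": "1-{:08x}-{}".format(int(time.time()), os.urandom(12).hex()),
        "start_time": time.time() - 0.25,
        "end_time": time.time(),
    }
    message = json.dumps({"format": "json", "version": 1}) + "\n" + json.dumps(segment)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(message.encode("utf-8"), ("127.0.0.1", 2000))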

X-Ray uses trace data from the AWS resources that power your cloud applications to generate a detailed service graph. The service graph shows the client, your front-end service, and the backend services that process requests and persist data. Use the service graph to identify bottlenecks, latency spikes, and other issues to solve to improve the performance of your applications.

AWS X-Ray Analytics helps you quickly and easily understand

  • Any latency degradation or increase in error or fault rates
  • The latency experienced by customers in the 50th, 90th, and 95th percentiles
  • The root cause of the issue at hand
  • End users who are impacted, and by how much
  • Comparisons of trends based on different criteria. For example, you could understand if new deployments caused a regression

How AWS X-Ray Works
AWS X-Ray receives data from services as segments. X-Ray groups segments that belong to a common request into traces. X-Ray processes the traces to generate a service map, which provides a visual depiction of the application.

AWS X-Ray features

  • Simple setup
  • End-to-end tracing
  • AWS Service and Database Integrations
  • Support for Multiple Languages
  • Request Sampling
  • Service map

Benefits of Using AWS X-Ray

Review Request Behaviour
AWS X-Ray traces customers’ requests and accumulates the information generated by the individual resources and services that make up your application, granting you an end-to-end view of the actions and performance of your application.

Discover Application Issues
With AWS X-Ray, you can extract insights about your application’s performance and find root causes. Because AWS X-Ray has tracing features, you can easily follow request paths to diagnose where in your application performance issues arise and what is creating them.

Improve Application Performance
AWS X-Ray’s service maps allow you to see the connections between resources and services in your application in real time. You can easily notice where high latencies are occurring, visualize node and edge latency distributions for services, and then drill down into the specific services and paths impacting application performance.

Ready to use with AWS
AWS X-Ray operates with Amazon EC2 Container Service, Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda. You can use AWS X-Ray with applications written in Node.js, Java, and .NET that are deployed on these services.

Designed for a Variety of Applications
AWS X-Ray works for both simple and complex applications, whether in production or in development. With X-Ray, you can trace requests made to applications that span multiple AWS accounts, AWS Regions, and Availability Zones.

Why AWS X-Ray?
Developers spend a lot of time searching through application logs, service logs, metrics, and traces to understand performance bottlenecks and to pinpoint their root causes. Correlating this information to identify its impact on end users comes with its own challenges of mining the data and performing analysis. This adds to the triaging time when using a distributed microservices architecture, where the call passes through several microservices. To address these challenges, AWS launched AWS X-Ray Analytics.

X-Ray helps you analyze and debug distributed applications, such as those built using a microservices architecture. Using X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root causes of performance issues and errors. It helps you debug and triage distributed applications wherever those applications are running, whether the architecture is serverless, containers, Amazon EC2, on-premises, or a mixture of all of these.

Relevance Lab is a specialist AWS partner and can help organizations implement a monitoring and observability framework, including AWS X-Ray, to ease application management and help identify bugs in complex distributed workflows.

For a demo of the same, please click here

For more details, please feel free to reach out to marketing@relevancelab.com




The Consumer Packaged Goods (CPG) industry is one of the largest industries on the planet. From food and beverage to clothes to stationery, it is impossible to think of a moment in our lives without being touched or influenced by this sector. If there is one paradigm around which the industry revolves, regardless of the sub-sector or the geography, it is the fear of stock-outs. Studies indicate that when a customer finds a product unavailable, 31% are likely to switch over to a competitor the first time it happens. That figure rises to 50% the second time and to 70% the third time.

Historically, the panacea for this problem has been to overstock. While this reduced the risk of stock outs to a great extent, it induced a high cost for holding the inventory and increased risk of obsolescence. It also created a shortage of working capital since a part of it is always locked away in holding excess inventory. This additional cost is often passed on to the end customer. Over time, an integrated planning solution which could predict demand, supply and inventory positions became a key differentiator in the CPG industry since it helped rein in costs and become competitive in an industry which is extremely price sensitive.

Although theoretically a planning solution should have been able to solve the inventory puzzle, in practice a lot of challenges kept limiting its efficacy. Conventional planning solutions have been built around local planning practices. Such solutions have had trouble negotiating the complex demand patterns of customers, which are influenced by general consumer behaviour as well as seasonal trends in the global market. As a result, the excess inventory problem stays, exacerbated at times by the bullwhip effect.

This is where the importance of a global, integrated Production Sales Inventory (PSI) solution comes in. But usually, this is easier said than done. Large organizations face multiple practical challenges when they attempt to implement this. The following are the typical challenges that large organizations face.


  • Infrastructural Limitations
    Using conventional Business Intelligence or Planning systems would require very heavy investment in infrastructure and systems. Also, the results may not be proportionate to the investments made.
  • Data Silos
    PSI requires data from different departments including sales, production, and procurement/sourcing. Even if the organization has a common ERP, the processes and practices in each department might make it difficult to combine data and get insights.
    Another significant hurdle is the fact that larger organizations usually tend to have multiple ERPs for handling local transactions aligned to geographical markets. Each ERP or data source that does not talk to other systems becomes siloed. The complexities increase when the data formats and tables are incompatible, especially when the ERPs are from different vendors.
  • Manual Effort
    Harmonizing the data from multiple systems and making it coherent involves huge manual effort in designing, building, testing, and deployment if done conventionally. The prohibitive costs involved, not to mention the human effort, are a huge challenge for most organizations.

Relevance Lab has helped multiple customers tide over the above challenges and get a faster return on their investments.

Here are the steps we follow to achieve a responsive global supply chain

  • Gather Data: Collate data from all relevant systems
    Leveraging data from as many relevant sources (both internal and external) as possible is one of the most important steps in ensuring a responsive global supply chain. The challenge of handling the huge data volume is addressed through the use of big data technologies. The data gathered is then cleansed and harmonized using SPECTRA, Relevancelab big data/analytics platform. SPECTRA can then combine the relevant data from multiple sources, and refresh the results at specified periodic intervals. One point of note here is that Master Data harmonization, that usually consumes months of effort can be significantly accelerated with the SPECTRA’s machine learning and NLP capabilities.

  • Gain Insights: Know the as-is state from intuitive visualizations
    The data pulled in from various sources can be combined to see a snapshot of inventory levels across the supply chain. SPECTRA’s built-in data models and quasi plug-and-play visualizations ensure that users get a quick and accurate picture of their supply chain. Starting with a bird’s-eye view of current inventory levels across various types of stocking locations and across each inventory type, the visualization capabilities of SPECTRA can be leveraged for a granular view of current inventory positions or backlog orders, or to compare sales with forecasts. This is a critical step in the overall process, as it helps organizations clearly define their problems and identify likely end states. For example, the organization could go for a deeper analysis to identify slow-moving and obsolete inventory or fine-tune its planning parameters.

  • Predict: Use big data to predict inventory levels
    The data from various systems can be used to predict likely inventory levels based on service-level targets, demand predictions, and production and procurement information. Time series analysis is used to predict the lead time for production and procurement. Projected inventory levels for future days/weeks, thus calculated, are more likely to reflect actual inventory levels, since the uncertainties, both external and internal, have been well accounted for (see the sketch after this list).

  • Act: Measurement and Continuous Improvement
    Inventory management is a continuous process. The above steps provide a framework for measuring and tracking the performance of the inventory management solution and making necessary course corrections based on real-time feedback.
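
A toy sketch of the projected-inventory arithmetic described in the Predict step above, using pandas; all numbers are made up:

    import pandas as pd

    # Weekly projection: opening stock plus scheduled receipts minus
    # forecast demand, carried forward cumulatively.
    weeks = pd.DataFrame({
        "forecast_demand":    [120, 140, 110, 150],   # made-up numbers
        "scheduled_receipts": [100, 150, 100, 100],
    })
    opening_stock = 200
    weeks["projected_inventory"] = (
        opening_stock
        + (weeks["scheduled_receipts"] - weeks["forecast_demand"]).cumsum()
    )
    print(weeks)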

Conclusion
Successful inventory management is one of the basic requirements for financial success for companies in the Consumer Packaged Goods sector. There is no perfect solution, as customer needs and the environment are dynamic, and the optimal solution can only be reached iteratively. Relevance Lab’s framework for inventory management, combining deep domain experience with SPECTRA’s capabilities, like NLP for faster master data management & harmonization, pre-built data models, quasi plug-and-play visualizations, and custom algorithms, offers a faster turnaround and quicker return on investment. Additionally, the comprehensive process ensures that the data is massaged and prepped for both broader and deeper analysis of the supply chain and risk in the future.

Additional references
https://www.2flow.ie/news-and-blog/solving-the-out-of-stock-problem-infographic

To learn how you can leverage ML and AI within your inventory management strategy, please reach out to marketing@relevancelab.com




In our increasingly digitized world, companies across industries are embarking on digital transformation journeys to transform their infrastructure, application architecture, and footprint to a more modern technology stack, one that allows them to be nimble and agile when it comes to maintainability, scalability, and easier deployment (smaller units can be deployed more frequently).

Old infrastructure and traditional ways of building applications are inhibiting growth for large enterprises and mid-sized and small businesses. Rapid innovation is needed to roll out new business models, optimize business processes, and respond to new regulations. Business leaders and employees understand the need for this agility – everyone wants to be able to connect to their Line of Business (LOB) systems through mobile devices or remotely in a secure and efficient manner, no matter how old or new these systems are, and this is where Application Modernization comes into the picture.

A very interesting use case was shared with us by a large Financial Asset Management customer of ours. They had a legacy application, 15+ years old, with challenges like tightly coupled business modules, code base/solution maintainability, complexity in implementing a lighter version of workflow, no modular way of deploying key features, a legacy technology stack, etc. To solve this problem, we ran a solid envisioning phase for the future-state application, considering a next-generation solution architecture approach, the latest technology stack, and value-adds for the business – a lighter workflow engine, responsive design, and an end-to-end (E2E) DevOps solution.

Legacy Application Modernizations/Platform Re-Engineering
Legacy application modernization projects intend to create new business value from existing, aging applications by updating or replacing them with modern technologies, features, and capabilities. By migrating legacy applications, businesses can include the latest functionality that better aligns with their needs for transformation & success.

These initiatives are typically designed and executed with phased rollouts that replace certain functional feature sets of the legacy application with each successive rollout, eventually evolving into a complete, new, agile, modern application that is feature-rich, flexible, configurable, scalable, and maintainable in the future.

Monolithic Architecture Vs Microservices Architecture – The Big Picture

Monolithic Architecture

  • Traditional way of building applications
  • An application is built as one large system and is usually one codebase
  • Application is tightly coupled and gets entangled as the application evolves
  • Difficult to isolate services for purposes such as independent scaling or code maintainability
  • Usually deployed on a set of identical servers behind a load balancer
  • Difficult to scale parts of the application selectively
  • Usually has one large code base and lacks modularity. If the developer community wants to update or change something, they access the same code base and make changes to the whole stack at once

The following diagram depicts an application built using Monolithic Architecture

Microservices Architecture

  • Modern way of building applications
  • A microservice application typically consists of many services
  • Each service has multiple runtime instances
  • Each service instance needs to be configured, deployed, scaled, and monitored

Microservices Architecture – Tenets
The Microservices Architecture breaks the Monolithic application into a collection of smaller, independent units. Some of the salient features of Microservices are

  • Highly maintainable and testable
  • Autonomous and Loosely coupled
  • Independently deployable
  • Independently scalable
  • Organized around domain or business capabilities (context boundaries)
  • Owned by a small team
  • Owning their related domain data model and domain logic (sovereignty and decentralized data management), possibly based on different data storage technologies (SQL, NoSQL) and different programming languages

The following diagram depicts an enterprise application built using Microservices Architecture, leveraging the Microsoft technology stack.


Benefits of Microservices Architecture

  • Easier Development & Deployment – Enables frequent deployment of smaller units. The microservices architecture enables the rapid, frequent, and reliable delivery of large, complex applications
  • Technology adoption/evolution – Enables an organization to evolve its technology stack
  • Process Isolation/Fault tolerance – Each service runs in its own process and communicates with other processes using standard protocols such as HTTP/HTTPS, WebSockets, or AMQP (Advanced Message Queuing Protocol); a minimal sketch of one such service follows this list
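
To make the tenets concrete, here is a minimal sketch of one small, independently deployable service exposing an HTTP API. It is shown in Python with FastAPI purely for brevity (the reference architecture above uses the Microsoft stack), and the "orders" endpoint and data are hypothetical:

    from fastapi import FastAPI

    # One small, independently deployable service owning its own domain
    # ("orders"); other services talk to it only over HTTP.
    app = FastAPI(title="orders-service")

    @app.get("/orders/{order_id}")
    def get_order(order_id: int):
        # Stand-in for a lookup against this service's own data store.
        return {"order_id": order_id, "status": "SHIPPED"}

    # Run with: uvicorn orders_service:app --port 8001
    # The service can be scaled, deployed, and versioned independently.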

Today, enterprise customers across the globe like eBay, GE Healthcare, Samsung, BMW, and Boeing have adopted the Microsoft Azure platform for developing their digital solutions. We at Relevance Lab have also delivered numerous digital transformation initiatives for our global customers by leveraging the Azure platform and the Agile Scrum delivery methodology.

The following diagram depicts an enterprise solution development life cycle leveraging the Azure platform and its various components, which enable the Agile Scrum methodology for E2E solution delivery.


Conclusion
Monolithic Architecture does have its strengths, like development and deployment simplicity, easier debugging and testing, and fewer cross-cutting concerns, and can be a good choice for certain situations, typically smaller applications. However, for larger, business-critical applications, the monolithic approach can bring up challenges like technological barriers, scalability, and tight coupling (rigidity); it makes changes difficult, and development teams find such applications hard to understand.

By adopting Microservices architecture and Microsoft Azure platform based solutions, businesses can realize the benefits below.

  • Easier, rapid development of enterprise solutions
  • Distributed global teams can each focus on the development of certain services of the system
  • Organized around business capabilities, with rapid infrastructure provisioning & application development – the technology team focuses not just on technologies but also acquires business domain knowledge, organizes around business capabilities, and builds cloud infrastructure provisioning / capacity planning knowledge
  • Offers modularization for large enterprise applications, increases productivity, and helps distributed teams focus on their specific modules, deliver them with speed, and scale them based on business growth

For more details, please feel free to reach out to marketing@relevancelab.com




While there is rapid momentum for every enterprise in the world in consuming more cloud assets and services, there is still a lack of maturity in adopting an “Automation-First” approach to establish self-service models for cloud consumption, due to fear of uncontrolled costs, security & governance risks, and the lack of standardized Service Catalogs of pre-approved assets & service requests from central IT groups. Lack of delegation and self-service has a direct impact on the speed of innovation and productivity, with higher operations costs.

Working closely with AWS as a partner, we have now created a flexible platform for driving faster adoption of Self-Service Cloud Portals. The primary needs for such a Self-Service Cloud Portal are the following.

  • Adherence to Enterprise IT Standards
    • Common architecture
    • Governance and Cost Management
    • Deployment and license management
    • Identity and access management
  • Common Integration Architecture with existing platforms on ITSM and Cloud
    • Support for ServiceNow, Jira, Freshservice and Standard Cloud platforms like AWS
  • Ability to add specific custom functionality in the context of Enterprise Business needs
    • The flexibility to add business specific functionality is key to unlocking the power of self-service models outside the standard interfaces already provided by ITSM and Cloud platforms

A common way of identifying the need for a Self-Service Cloud Portal is to ask the following questions.

  • Does your enterprise already have any Self-Service Portals?
  • Do you have a large user base internally or with external users requiring access to Cloud resources?
  • Does your internal IT have the bandwidth and expertise to manage current workloads without impacting end user response time expectations?
  • Does your enterprise have a proper security governance model for Cloud management?
  • Are there significant productivity gains by empowering end users with Self-Service models?

Working with our AWS partnership and our existing customers, we see a growing need for Self-Service Cloud Portals in 2020, predominantly centered around two models.

  • Enterprises with existing ITSM investments that need to leverage them to extend into Cloud Management
  • Enterprises extending access beyond internal enterprise users with custom Cloud Portals

The roadmap to Self-Service Cloud Portals is specific to each enterprise's needs and should leverage the existing adoption and maturity of its Cloud and ITSM platforms, as explained below. With Relevance Lab RLCatalyst products, we help enterprises achieve this maturity in a cost-effective and expedited manner.


Examples of Self-Service Cloud Portals



Standard Needs and Platform Benefits

  • Look-n-feel of modern Self-Service Portals: professional and responsive UI design, with multiple themes available and customizations allowed
  • Standards-based architecture and governance: built on AWS products and the AWS Well-Architected Framework, with pre-built reference-architecture-based products
  • Pre-built minimum viable product needs: an 80-20 model of pre-built functionality vs. customizations, based on the key components of core functionality
  • Proprietary vs. open source: an open-source foundation built on the MEAN stack, with source code made available
  • Access control, security, and governance: standard options pre-built, with easy extensions (SAML-based); deployed with enterprise-grade security and compliance
  • Rich standard pre-built catalog of Assets and Services: comes with 100+ catalog items covering standard Asset and Service needs, catering to about 50% of any enterprise's infrastructure, application, and service delivery needs


Explained below is a sample AWS Self-Service Cloud Portal for driving Scientific Research.



Getting started
To make it easier for enterprises to experience the power of Self-Service Cloud Portals, we offer two options based on enterprise needs.

  • A hosted SaaS offering using our multi-tenant Cloud Portal, with the ability to connect to your existing Cloud accounts and Service Catalogs
  • A self-hosted RLCatalyst Cloud Portal product, with the option to engage us for professional services covering customizations, training, initial setup, and onboarding

Pricing for the SaaS offering is based on a per-user monthly subscription, while for the self-hosted model an enterprise support pricing plan is available for the open-source solution, giving enterprises the flexibility to use it without proprietary lock-in.

The typical steps to get started are simple (a hypothetical configuration sketch follows this list).

  • Set up an organization and business units or projects aligned with your Cloud accounts, for easy billing and access-control tracking
  • Set up users and roles
  • Set up budgets and controls
  • Set up a standard catalog of items for users to order

With the above, enterprises are up and running with a Self-Service Cloud Portal in less than a day, with inbuilt controls for tracking and compliance.
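
Purely as an illustration, the onboarding data behind these four steps could be captured in a structure like the Python dictionary below. The field names here are hypothetical and are not the actual RLCatalyst schema:

# Hypothetical shape of the onboarding data for a self-service cloud portal.
# Names and fields are illustrative only, not the RLCatalyst API.
onboarding = {
    "organization": {
        "name": "Acme Research",
        "business_units": [
            # Each unit maps to a cloud account for billing and access tracking
            {"name": "Genomics", "aws_account_id": "111111111111"},
            {"name": "Clinical", "aws_account_id": "222222222222"},
        ],
    },
    "users": [
        {"email": "admin@acme.org", "role": "administrator"},
        {"email": "researcher@acme.org", "role": "consumer"},
    ],
    "budgets": {
        # Spend limit per business unit, with an alert threshold
        "Genomics": {"monthly_usd": 5000, "alert_at_percent": 80},
    },
    "catalog": ["ec2-workspace", "s3-project-bucket", "sagemaker-notebook"],
}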

Summary
Self-Service Cloud Portals are a growing need in 2020, and we see the momentum continuing into next year as well. Different market segments have different needs for Self-Service Cloud Portals, as explained in this blog.


  • The Scientific Research community is interested in a Research Gateway solution
  • University IT looks for a University-in-a-Box Self-Service Cloud
  • Enterprises using ServiceNow want to extend their internal Self-Service Portals
  • Enterprises are also developing Hybrid Cloud Orchestration Portals
  • Enterprises building an AIOps Portal need monitoring, automation, and service management
  • Enabling Virtual Training Labs requires user and workspace onboarding
  • Building an integrated Command Centre requires an Intelligent Monitoring Portal
  • Enterprises want an Intelligent Automation Portal with a ServiceNow Connector

We provide pre-built solutions for Self-Service Cloud Portals and a base platform that can be easily extended with new functionality for customization and integration. A number of large enterprises and universities are leveraging our Self-Service Cloud Portal solutions, using both existing ITSM tools (ServiceNow, Jira, Freshservice) and RLCatalyst products.

To learn more about using AWS Cloud or ITSM solutions for Self-Service Cloud portals contact marketing@relevancelab.com




2020 Blog, Analytics, Blog, Featured

If you are a business with a digital product or a subscription model, then you are already familiar with this key metric – “Customer Churn”.

Customer Churn is the percentage of customers who stopped using your product during a given period. This is a critical metric, as it not only reflects customer satisfaction but also has a big impact on your bottom line. A common rule of thumb is that acquiring a new customer costs 6-7 times more than keeping a customer you already have. In addition, existing customers are expected to spend more over time, and satisfied customers lead to additional sales through referrals. Market studies show that increasing customer retention by even a small percentage can boost revenues significantly, and further research reveals that most professionals consider churn just as important a metric as new customer acquisition, or more so.

Subscription businesses strongly believe customers cancel for reasons that could be managed or fixed. “Customer Retention” is the set of strategies and actions that a company follows to keep existing customers from churning. A data-driven customer retention strategy, leveraging the power of big data and machine learning, offers a significant opportunity for businesses to create a competitive advantage over peers that lack one.

Relevance Lab (RL) recently helped a large US-based digital learning company benefit from a detailed churn analysis of its subscription customers, by leveraging the RL SPECTRA platform with machine learning. The portfolio included several digital subscription products used in school educational curricula, renewed annually at the start of the school calendar year. Each year several customers did not renew their licenses, and importantly, this became apparent only at the end of the subscription cycle, typically too late for the sales team to respond effectively.

Here are the steps the organization took along its churn management journey.



  • Gather multiple data points to generate better insights
    As with any analysis, to figure out where your churn is coming from, you need to keep track of the right data. Machine learning initiatives especially depend on large quantities of raw data, from which the algorithms learn complex patterns. A sample list of data attributes could include online interactions with the product, clicks, page views, test scores, incident reports, payment information, etc.; it could also include unstructured data elements such as reports, reviews, and blog posts.

    In this particular example, the data was pulled from four different databases which contained the product platform data for the relevant geography. The data collected included product features, sales and renewal numbers, as well as student product usage and test performance statistics, going back 4 years.

    Next, the data was cleansed to remove trial licenses, dummy tests, etc., and to normalize missing data. Finally, the data was harmonized to bring all the information into a consolidated format.

    All the above pipelines were established using the SPECTRA ETL process. The result was a fully functional data setup, with cleaned data organized in tables, ready for the machine learning algorithms to use for churn prediction (see the data-preparation sketch after this list).

  • Use predictive analytics and machine learning to know who is at risk
    Once you have the data, you are ready to work on the core of your analysis: understanding where the risk of churn is coming from, and hence identifying the opportunities for strengthening your customer relationships. Machine learning techniques are especially suited to this task, as they can work through massive amounts of historical data to learn about customer behavior, and then use this training to make predictions about important outcomes such as retention.

    On this assignment, the RL team tried out a number of the machine learning models built into SPECTRA to predict churn and zeroed in on a random forest model. This method is effective on noisy, inconsistent data sets, because it averages the predictions of a large number of randomized decision trees instead of relying on a single model. In the end, the system provided a predicted churn-risk rating for each customer and highlighted the ones most at risk (see the model sketch after this list).

  • Define the most valuable customers
    In parallel with identifying customers at risk of churn, data can also be used to segment customers into different groups, to identify how each group interacts with your product. In addition, data on purchase frequency, purchase value, and product coverage helps you quickly identify which types of customers drive the most revenue, and which are a poor fit for your product. This allows you to adopt different communication and servicing strategies for each group, and to retain your most valuable customers.

    By combining the machine learning model output with this segmentation exercise, the result was a dynamic dashboard which could be sorted and filtered by criteria such as customer size and geographical location. This highlighted the customers at the highest risk from the joint viewpoint of attrition and revenue loss, which in turn enabled the client to deploy sales team resources in the most effective manner.

  • Engage with the customers
    Now that you have identified the top customers you are at risk of losing, the next step is to actively engage with them and give them an incentive to stay, by helping them achieve real value from your product.

    The nature of the engagement depends on the stage of the customer relationship. Is the customer in the early stage of product adoption? The customer may be struggling to get set up with your product; make sure they have access to enough training material, and consider whether they need additional onboarding support.

    If the customer is in the middle stage, they may not be realizing enough business value from your product; check in with them to see whether they are making enough progress towards their goals. If the customer is in a late stage, they may be evaluating competitor offerings or may be frustrated by bugs, and the discussion would need to be shaped accordingly.

    To tailor the nature of your conversation, take a close look at the customer product interaction metrics. In our example, all the customer usage patterns, test performance, books read, word literacy, etc., were collected and presented as a dashboard, giving the sales and marketing teams a single point of reference to review customer engagement levels and connect constructively with the customer's management.

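As a concrete illustration of the gathering and cleansing step above, the sketch below uses pandas with hypothetical file and column names; the actual SPECTRA ETL pipeline and schema differ:

# Sketch: cleanse and harmonize raw product data into one feature table.
# File and column names are hypothetical.
import pandas as pd

usage = pd.read_csv("product_usage.csv")   # clicks, page views, test scores per customer
renewals = pd.read_csv("renewals.csv")     # license, revenue, and renewal outcomes

# Cleanse: drop trial licenses and dummy test records
usage = usage[usage["license_type"] != "trial"]
usage = usage[~usage["test_id"].astype(str).str.startswith("DUMMY")]

# Normalize missing data, e.g. fill absent test scores with the median
usage["test_score"] = usage["test_score"].fillna(usage["test_score"].median())

# Harmonize: consolidate to one row per customer and join renewal outcomes
features = (
    usage.groupby("customer_id")
         .agg(logins_per_month=("logins", "mean"),
              pages_viewed=("page_views", "sum"),
              avg_test_score=("test_score", "mean"))
         .reset_index()
         .merge(renewals[["customer_id", "annual_revenue", "churned"]], on="customer_id")
)
features.to_csv("churn_features.csv", index=False)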

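And as a sketch of the prediction step, the snippet below trains a random forest on that feature table with scikit-learn and ranks customers by churn risk; again, the column names are hypothetical and the real SPECTRA models are more involved:

# Sketch: random forest churn prediction over the prepared feature table.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("churn_features.csv")  # output of the preparation sketch above
feature_cols = ["logins_per_month", "pages_viewed", "avg_test_score"]
X, y = df[feature_cols], df["churned"]  # churned: 1 = did not renew

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

# Many randomized trees average out noise in inconsistent usage data
model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Score every customer; sorting by risk and revenue together surfaces the
# accounts the sales team should engage first
df["churn_risk"] = model.predict_proba(X)[:, 1]
at_risk = df.sort_values(["churn_risk", "annual_revenue"], ascending=False)
print(at_risk[["customer_id", "annual_revenue", "churn_risk"]].head(10))
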
Conclusion
If you are looking to reduce customer churn and improve customer retention, it all comes down to predicting the customers at risk of churn, analyzing the reasons behind churn, and then taking appropriate action. Machine learning based models are of particular help here, as they can take into account hundreds or even thousands of different factors, which may not be obvious or even possible for a human analyst to track. In this example, the SPECTRA platform helped the client's sales team predict customers' inclination to renew the specific learning product with 92% accuracy.

Additional references
Research from Bain and Co. shows that increasing customer retention by even 5% boosts revenues by 25% – 95%
Report from Brightback reveals that churn is just as important a metric as new customer acquisition, or more so

To learn how you can leverage machine learning and AI within your customer retention strategy, please reach out to marketing@relevancelab.com



