2020 Blog, Blog, Feature Blog, Featured

While every enterprise in the world is rapidly consuming more Cloud assets and services, there is still a lack of maturity in adopting an “Automation-First” approach to establish self-service models for Cloud consumption, due to fear of uncontrolled costs, security and governance risks, and the lack of standardized Service Catalogs of pre-approved assets and service requests from central IT groups. This lack of delegation and self-service has a direct impact on the speed of innovation and productivity, and drives up operations costs.

Working closely with AWS as a partner, we have created a flexible platform for driving faster adoption of Self-Service Cloud Portals. The primary needs for such a portal are the following.

  • Adherence to Enterprise IT Standards
    • Common architecture
    • Governance and Cost Management
    • Deployment and license management
    • Identity and access management
  • Common Integration Architecture with existing platforms on ITSM and Cloud
    • Support for ServiceNow, Jira, Freshservice and Standard Cloud platforms like AWS
  • Ability to add specific custom functionality in the context of Enterprise Business needs
    • The flexibility to add business-specific functionality is key to unlocking the power of self-service models beyond the standard interfaces already provided by ITSM and Cloud platforms

A common way of identifying the need for a Self-Service Cloud Portal is to answer the following questions.

  • Does your enterprise already have any Self-Service Portals?
  • Do you have a large user base internally or with external users requiring access to Cloud resources?
  • Does your internal IT have the bandwidth and expertise to manage current workloads without impacting end user response time expectations?
  • Does your enterprise have a proper security governance model for Cloud management?
  • Are there significant productivity gains by empowering end users with Self-Service models?

Working with AWS and with our existing customers, we see a growing need for Self-Service Cloud Portals in 2020, predominantly centred around two models.

  • Enterprises with existing ITSM investments that want to extend them to Cloud Management
  • Enterprises extending beyond internal users with custom Cloud Portals

The roadmap to Self-Service Cloud Portals is specific to each enterprise's needs and should leverage the existing adoption and maturity of its Cloud and ITSM platforms, as explained below. With Relevance Lab RLCatalyst products, we help enterprises achieve this maturity in a cost-effective and expedited manner.


Examples of Self-Service Cloud Portals



Standard Needs and Platform Benefits

  • Look-n-Feel of Modern Self-Service Portals: professional and responsive UI design, with multiple themes available and customizations allowed
  • Standards-based Architecture & Governance: built on AWS products and the AWS Well-Architected Framework, with pre-built Reference Architecture based products
  • Pre-built Minimum Viable Product Needs: an 80-20 model of pre-built vs. customized, based on key components of core functionality
  • Proprietary vs. Open Source: an open-source foundation with source code made available, built on the MEAN stack
  • Access Control, Security and Governance: standard options pre-built, with easy extensions (SAML-based); deployed with enterprise-grade security and compliance
  • Rich Standard Pre-built Catalog of Assets and Services: comes with 100+ catalog items covering standard Asset and Service needs, catering to roughly 50% of any enterprise's infrastructure, application and service delivery needs


Explained below is a sample AWS Self-Service Cloud Portal for driving Scientific Research.



Getting started
To make it easier for enterprises to experience the power of Self-Service Cloud Portals, we offer two options based on enterprise needs.

  • A hosted SaaS offering using our multi-tenant Cloud Portal, with the ability to connect to your existing Cloud Accounts and Service Catalogs
  • A self-hosted RLCatalyst Cloud Portal product, with the option to engage us for professional services covering customizations, training, initial setup and onboarding

Pricing for the SaaS offering is a per-user monthly subscription. For the self-hosted model, enterprise support pricing is available for the open-source solution, giving enterprises the flexibility to use it without proprietary lock-in.

The typical steps to get started are simple and cover the following.

  • Set up an organization and business units or projects aligned with your Cloud Accounts, for easy billing and access-control tracking
  • Set up users and roles
  • Set up budgets and controls
  • Set up a standard catalog of items for users to order
  • With the above, enterprises are up and running with Self-Service Cloud Portals in less than a day, with built-in controls for tracking and compliance
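
For teams scripting this setup directly against AWS, the sketch below shows what the budget and catalog steps might look like with boto3. This is a minimal illustration, not the RLCatalyst implementation; the account ID, portfolio ID and role ARN are placeholders.

```python
import boto3

# Placeholder identifiers -- substitute your own account, portfolio and role.
ACCOUNT_ID = "111122223333"
PORTFOLIO_ID = "port-examplep0rtfol1"
TEAM_ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/ProjectTeamRole"

budgets = boto3.client("budgets")
servicecatalog = boto3.client("servicecatalog")

# Budgets and controls: a monthly cost budget for the business unit/project.
budgets.create_budget(
    AccountId=ACCOUNT_ID,
    Budget={
        "BudgetName": "project-alpha-monthly",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
)

# Standard catalog: grant the project role access to the pre-approved
# Service Catalog portfolio so its users can order items from it.
servicecatalog.associate_principal_with_portfolio(
    PortfolioId=PORTFOLIO_ID,
    PrincipalARN=TEAM_ROLE_ARN,
    PrincipalType="IAM",
)
```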

Summary
Cloud Portals for Self-Service are a growing need in 2020, and we see the momentum continuing into next year as well. Different market segments have different needs for Self-Service Cloud Portals, as explained in this blog.


  • Scientific Research community is interested in a Research Gateway Solution
  • University IT looks for a University in a Box Self Service Cloud
  • Enterprises using ServiceNow want to extend the internal Self Service Portals
  • Enterprises are also developing Hybrid Cloud Orchestration Portals
  • Enterprises building an AIOps Portal need monitoring, automation and service management
  • Enabling Virtual Training Labs with User and Workspace onboarding
  • Building an integrated Command Centre requires an Intelligent Monitoring portal
  • Enterprise Intelligent Automation Portal with ServiceNow Connector

We provide pre-built solutions for Self-Service Cloud Portals and a base platform that can be easily extended with new functionality for customization and integration. A number of large enterprises and universities are leveraging our Self-Service Cloud Portal solutions, using both existing ITSM tools (ServiceNow, Jira, Freshservice) and RLCatalyst products.

To learn more about using AWS Cloud or ITSM solutions for Self-Service Cloud portals contact marketing@relevancelab.com




2020 Blog, Analytics, Blog, Featured

If you are a business with a digital product or a subscription model, then you are already familiar with this key metric – “Customer Churn”.

Customer Churn is the percentage of customers who stopped using your product during a given period. This is a critical metric: it not only reflects customer satisfaction, it also has a big impact on your bottom line. A common rule of thumb is that it costs 6-7 times more to acquire a new customer than to keep an existing one. In addition, existing customers are expected to spend more over time, and satisfied customers lead to additional sales through referrals. Market studies show that increasing customer retention by even a small percentage can boost revenues significantly, and further research reveals that most professionals consider churn just as important a metric as new customer acquisition, or more so.

Subscription businesses strongly believe customers cancel for reasons that could be managed or fixed. “Customer Retention” is the set of strategies and actions that a company follows to keep existing customers from churning. Employing a data-driven customer retention strategy, leveraging the power of big data and machine learning, offers a significant opportunity for businesses to create a competitive advantage over peers that don't.

Relevance Lab (RL) recently helped a large US-based digital learning company benefit from a detailed churn analysis of its subscription customers, leveraging the RL SPECTRA platform with machine learning. The portfolio included several digital subscription products used in school educational curriculums, renewed annually at the start of the school calendar year. Each year several customers did not renew their licenses, and importantly, this became visible only at the end of the subscription cycle, typically too late for the sales team to respond effectively.

Here are the steps that the organization took along its churn management journey.



  • Gather multiple data points to generate better insights
    As with any analysis, to figure out where your churn is coming from, you need to keep track of the right data. Especially with machine learning initiatives, the algorithms depend on large quantities of raw data to learn complex patterns. A sample list of data attributes could include online interactions with the product, clicks, page views, test scores, incident reports, payment information, etc. It could also include unstructured data elements such as reports, reviews and blog posts.

    In this particular example, the data was pulled from four different databases containing the product platform data for the relevant geography. The data collected included product features, sales and renewal numbers, as well as student product usage and test performance statistics, going back four years.

    Next, the data was cleansed to remove trial licenses, dummy tests, etc., and to normalize missing data. Finally, the data was harmonized to bring all the information into a consolidated format.

    All the above pipelines were established using the SPECTRA ETL process. Now there was a fully functional data setup with cleaned data ordered in tables, to be used in the machine learning algorithms for churn prediction.

  • Predictive analytics use Machine Learning to know who is at risk
    Once you have the data, you are ready to work on the core of your analysis: understanding where the risk of churn is coming from, and hence identifying the opportunities for strengthening your customer relationships. Machine learning techniques are especially suited to this task, as they can churn through massive amounts of historical data to learn about customer behavior, and then use this training to make predictions about important outcomes such as retention.

    On our assignment, the RL team tried out a number of machine learning models built into SPECTRA to predict churn, and zeroed in on a random forest model. This method is very effective on inconsistent data sets, as the large number of random trees handles differences in behavior well. In the end, the system provided a predicted churn rating for each customer and highlighted the ones most at risk (a sketch of this approach follows this list).

  • Define the most valuable customers
    Parallel to identifying customers at risk of churn, data can also be used to segment customers into different groups to identify how each group interacts with your product. In addition, data on frequency of purchase, purchase value and product coverage helps you quickly identify which types of customers drive the most revenue, versus customers that are a poor fit for your product. This allows you to adopt different communication and servicing strategies for each group, and to retain your most valuable customers.

    By combining our machine learning model output with the segmentation exercise, the result was a dynamic dashboard, which could be sorted/filtered by different criteria such as customer size and geographical location. This provided the opportunity to highlight the customers which were at the highest risk, from the joint viewpoint of attrition and revenue loss. This in turn enabled the client to effectively utilize sales team resources in the best possible manner.

  • Engage with the customers
    Now that you have identified the top customers you are at risk of losing, the next step is to actively engage with them and give them an incentive to stay, by helping them achieve real value from your product.

    The nature of engagement depends on the stage of the customer relationship. Is the customer in the early stage of product adoption? That could point to the customer being unable to get set up with your product. Here, you have to make sure the customer has access to enough training material; perhaps they require additional onboarding support.

    If the customer is in the middle stage, it could be that they are not realizing enough business value from your product. Here, you need to check in with your customer to see whether they are making enough progress towards their goals. If the customer is in the late stage, it is possible that they are looking at competitor offerings or were frustrated by bugs, and the discussion would need to be shaped accordingly.

    To tailor the nature of your conversation, take a close look at the customer product interaction metrics. In our example, all the customer usage patterns, test performance, books read, word literacy, etc., were collected and presented as a dashboard: a single point of reference for the sales and marketing teams to review customer engagement levels and connect constructively with the customer's management.
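
SPECTRA's internals are not public, but the random-forest approach described above can be sketched with scikit-learn. The CSV file and column names below are hypothetical stand-ins for the harmonized data set produced by the ETL step.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical schema: one row per customer, usage features plus a
# churned/renewed label from past renewal cycles.
df = pd.read_csv("customer_features.csv")
features = ["logins_per_month", "tests_taken", "avg_score", "tickets_raised"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2%}")

# Score every customer and surface the accounts most at risk, ready to be
# joined with the segmentation data for the sales dashboard.
df["churn_risk"] = model.predict_proba(df[features])[:, 1]
print(df.sort_values("churn_risk", ascending=False).head(20))
```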


Conclusion
If you are looking at reducing your customer churn and improving customer retention, it all comes down to predicting customers at risk of churn, analyzing the reasons behind churn, and then taking appropriate action. Machine learning based models are of particular help here, as they can take into account hundreds and even thousands of different factors, which may not be obvious or even possible to track for a human analyst. In this example, the SPECTRA platform helped the client sales team to predict the customers’ inclination for renewal of the specific learning product with 92% accuracy.

Additional references
Research from Bain and Co. shows that increasing customer retention by even 5% boosts revenues by 25%-95%.
Report from Brightback reveals that churn is just as or more important a metric than new customer acquisition.

To learn how you can leverage machine learning and AI within your customer retention strategy, please reach out to marketing@relevancelab.com




2020 Blog, Blog, Feature Blog, Featured

As universities deal with the challenging situation in 2020, with remote assets, workforces and students, there is a need to make education frictionless by leveraging cloud-based solutions in a pre-packaged model. Working closely with AWS to make digital learning frictionless, Relevance Lab is bringing a unique new concept to the market: University in a Box, a self-contained Cloud Portal with the basic applications needed to power a university. This radical and innovative concept is based on the idea of a school, college or university going from zero (no AWS account) to cloud native in hours, enabling the Cloud “Mission with Speed” with mature, secure and comprehensive adoption, fast.

A typical university starting its cloud journey needs a self-service interactive interface with user logins, tracking and ordering of the offered products; actions for connectivity after assets are deployed; lifecycle interactions in the Cloud Portal UI, with no need to go to the AWS Console; and a comprehensive view of cost and budget tracking.

The key building blocks for University in a Box comprise the following.

  • University Catalog – AWS CloudFormation templates useful to Higher Education, packaged as Service Catalog products
  • Self-Service Cloud Portal for University IT users to order items with security, governance and budget tracking
  • Easy onboarding model to get started with a hosted option or self-managed instances of Cloud Portal

Leveraging existing investments in AWS and standard products, the foundational pieces include a portfolio of useful software and architectures often used by universities.

  • Deploy Control Tower
  • Deploy GuardDuty
  • Deploy Security Hub
  • Deploy VPC + VPN
  • Deploy AD Extension
  • Deploy Web Applications SSO, Shibboleth, Drupal
  • Deploy FSx File Server
  • Deploy S3 Buckets for Backup Software
  • Deploy HIPAA workload
  • Deploy other solutions as needed: WorkSpaces, Duo, AppStream, etc.
  • WordPress Reference Architecture
  • Drupal Reference Architecture
  • Moodle Reference Architecture
  • Shibboleth Reference Architecture
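
Each of these portfolio items is a CloudFormation template packaged as a Service Catalog product. A minimal boto3 sketch of that packaging step, with placeholder template URL and portfolio ID (the RLCatalyst portal automates this):

```python
import boto3

servicecatalog = boto3.client("servicecatalog")

# Register a CloudFormation template as an orderable catalog product.
product = servicecatalog.create_product(
    Name="Moodle LMS",
    Owner="University Central IT",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        # Placeholder: the Moodle reference architecture template in your bucket.
        "Info": {"LoadTemplateFromURL": "https://s3.amazonaws.com/YOUR-BUCKET/moodle-master.yaml"},
    },
)

# Attach the product to the university portfolio so entitled users can order it.
servicecatalog.associate_product_with_portfolio(
    ProductId=product["ProductViewDetail"]["ProductViewSummary"]["ProductId"],
    PortfolioId="port-examplep0rtfol1",
)
```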




How to set up and use University in a Box?
The RLCatalyst Cloud Portal solution enables a university with no existing cloud footprint to deploy a self-service model for internal IT and consume standard applications seamlessly.


Steps for University-Specific Setup (approximate time taken)

  • Create the Root account for a new university enabling core systems on AWS Cloud: 0.5 hours
  • Launch Control Tower and create the Core OU and University OU: 1.5 hours
  • User and access management, account creation, budget enablement: 1 hour
  • Network design of the University Landing Zone (creation + configuration): 1.5 hours
  • Provision basic assets (infra and applications) from the standard catalog: 1 hour
  • Enable security and governance (includes VA, PM, Security Hub): 1.5 hours
  • User training and handover: 1 hour

The following diagram explains the deployment architecture of the solution.



University Users, Roles and Organization planning
Planning for university users, roles and organizations requires mapping to existing departments, IT and non-IT roles, and empowering users for self-service without compromising security or governance. This can vary between organizations, but common patterns are encountered, as explained below.

  • Common Delegation use cases for University IT
    • Delegate a product from a Lead Architect to Helpdesk, or a less skilled co-worker
    • Delegate a product from Lead Architect or Central IT, to another IT group, DBA team, Networking Team, Analytics Team
    • Delegate a product to another University Department – Academic, Video, etc
    • Delegate a product to a researcher or faculty member


Setup planning considerations on deployment and onboarding


Hosting Options

  • Option 1: Dedicated instance per customer
  • Option 2: Hosted model; the customer brings their own AWS account
  • Option 3: Hosted model; RL (Relevance Lab) provides a new AWS account

Initial Catalog Setup

  • Option 1: The customer has an existing Service Catalog
  • Option 2: Default Service Catalog items are loaded from a standard library
  • Option 3: A combination of the above

Optimizing Setup Parameters and Catalog Binding for Ease of Use

  • Option 1: The customer fills in details based on best practices and templates provided
  • Option 2: RL sets up the initial configuration based on existing parameters
  • Option 3: RL, as part of a new setup, creates an OU, a new account and the associated parameters

Additional Setup Considerations

  • DNS mapping for the Cloud Portal
  • Authentication: default Cognito, with SAML integration available
  • Mapping users to roles, organizations/projects/budgets


Standard Catalog for University in a Box, leveraging AWS-provided standard architecture best practices

The basic setup leverages the AWS Well-Architected Framework extensively and builds on AWS Reference Architectures, as detailed below. The following is a sample product preview list based on the AWS-provided University Catalog under its open-source program.


University Catalog Portfolio: A portfolio of useful software and architectures often used by colleges and universities.

WordPress Product with Reference Architecture: This Quick Start deploys WordPress, a web publishing platform for building blogs and websites that can be customized through a wide selection of themes, extensions and plugins. The Quick Start includes AWS CloudFormation templates and a guide with step-by-step instructions to help you get the most out of your deployment. The reference architecture provides a set of YAML templates for deploying WordPress on AWS using Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2), Auto Scaling, Elastic Load Balancing (Application Load Balancer), Amazon Relational Database Service (Amazon RDS), Amazon ElastiCache, Amazon Elastic File System (Amazon EFS), Amazon CloudFront, Amazon Route 53 and AWS Certificate Manager (ACM), all with AWS CloudFormation.

Scale-Out Computing Product: AWS enables data scientists and engineers to manage scale-out workloads such as high-performance computing (HPC) and deep learning training without extensive cloud experience. The Scale-Out Computing on AWS solution helps customers deploy and operate a multiuser environment for computationally intensive workflows such as Computer-Aided Engineering (CAE). The solution features a large selection of compute resources, a fast network backbone, unlimited storage, and budget and cost management directly integrated within AWS. It also deploys a user interface (UI) with cloud workstations, file management and automation tools that let you create your own queues, scheduler resources, Amazon Machine Images (AMIs), and management functions for user and group permissions. It is designed as a production-ready reference implementation you can use as a starting point for running scale-out workloads, letting users focus on simulations that solve complex computational problems. For example, with the unlimited storage capacity provided by Amazon Elastic File System (Amazon EFS), users won't run out of space for project input and output files. You can also integrate your existing LDAP directory with Amazon Cognito so users can seamlessly authenticate and run jobs on AWS.

Drupal Reference Architecture: Drupal is an open-source content management platform written in the PHP server-side scripting language, providing a backend framework for many enterprise websites. Deploying Drupal on AWS makes it easy to use AWS services to further enhance the performance and extend the functionality of your content management framework. The reference architecture provides a set of YAML templates for deploying Drupal on AWS using the same AWS services as the WordPress architecture above.

Moodle Reference Architecture: Moodle is a learning platform designed to provide educators, administrators and learners with a single robust, secure and integrated system to create personalized learning environments. This repository consists of a set of nested YAML templates which deploy a highly available, elastic and scalable Moodle environment on AWS, using the same AWS services as the architectures above. This architecture may be overkill for many Moodle deployments; however, the templates can be run individually and/or modified to deploy a subset of the architecture that fits your needs.

Shibboleth Reference Architecture with EC2: This reference architecture deploys a fully functional, scalable and containerized Shibboleth IdP, including rotation of IdP sealer keys using AWS Secrets Manager and AWS Lambda. The certificates that are part of the IdP, as well as some of the LDAP settings (including the username/password), are also stored in AWS Secrets Manager. The project is intended as a starting point for getting the Shibboleth IdP up and running quickly and easily on AWS, and as a foundation for a production-ready deployment. Be aware that deleting the stack deletes its CodeCommit repository, so your customizations will be lost; if you intend to use this for production, make a copy of the repo, host it in your own account, and take precautions to safeguard your changes.

REDCap on AWS CloudFormation: This repository contains AWS CloudFormation templates that automatically deploy a REDCap environment adhering to AWS architectural best practices. To use this automation, you must supply your own copy of the REDCap source files, available to qualified entities at projectredcap.org. Once you have downloaded the source files, follow the deployment instructions. In their own words: “REDCap is a secure web application for building and managing online surveys and databases. While REDCap can be used to collect virtually any type of data, including in 21 CFR Part 11, FISMA, and HIPAA-compliant environments, it is specifically geared to support online or offline data capture for research studies and operations.”
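
Since each reference architecture above ships as CloudFormation templates, a university team can also deploy one directly. A minimal boto3 sketch, assuming you have copied the master template to your own S3 bucket and reviewed its parameters (the URL and parameter values are placeholders):

```python
import boto3

cloudformation = boto3.client("cloudformation")

stack = cloudformation.create_stack(
    StackName="university-wordpress",
    # Placeholder URL -- point at the reference architecture's master template.
    TemplateURL="https://s3.amazonaws.com/YOUR-BUCKET/wordpress-master.yaml",
    Parameters=[
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # these templates create IAM roles
)

# Block until the nested stacks finish deploying (or roll back).
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName=stack["StackId"])
print("Deployed:", stack["StackId"])
```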


Summary
University in a Box is a powerful example of a specific business problem solved by leveraging the cloud, integrated with customer-specific use cases and easy deployment options to save time and money and reach maturity quickly.

For universities, colleges and schools adopting AWS Cloud infrastructure, applications and self-service models, the solution brings significant cost, effort and compliance benefits, helping them focus on “Driving Effective Learning” rather than on enabling cloud infrastructure, basic day-to-day applications and the delegation of tasks needed to achieve scale. With a combination of a pre-built solution and a managed services model that handholds customers through the full lifecycle of development, enhancement and support, Relevance Lab can be your trusted partner for digital learning enablement.

For a demo video, please click here.

To learn more about this solution, or to use it for your internal needs, feel free to contact marketing@relevancelab.com




2020 Blog, Blog, Featured

With the growing need for cloud adoption across enterprises, there is a need to move end-user computing workloads and traditional data center capacity to the cloud. Relevance Lab is working with AWS partner groups to simplify the cloud adoption process and bring best practices to the entire Plan-Build-Run lifecycle on the cloud. The following is the suggested blueprint for cloud adoption and moving new workloads onto the cloud.


  • CloudEndure to enable automated Cloud Migration
  • AWS Control Tower to set up and govern a new, secure multi-account AWS environment
  • AWS Security, Identity and Compliance
  • AWS Service Management Connector for ServiceNow with Service Catalog management
  • AWS Systems Manager for Operational Insights
  • RLCatalyst Intelligent Automation

As part of our own organization's experience adopting AWS for both workspace and server needs, we followed the process below to cater to the needs of multiple organizational roles.

Since we already had an AWS Master account but had not initially used AWS Control Tower, the steps we followed were as follows.


  • Set up and launch AWS Control Tower in our Master Account, and build multiple custom OUs (Organizational Units) and corresponding accounts using Account Factory
  • Use CloudEndure to migrate existing workloads to the new organizations under Control Tower
  • For two different organizational units, publish separate service catalogs, with access controlled by user roles defined in AD and integrated with ServiceNow, so that only approved users can order items relevant to their needs
  • Use the AWS Service Management Connector to publish the catalogs and integrate with AWS resources
  • Implement RLCatalyst BOTs automation for 1-click provisioning
  • Apply different guardrails for workloads provisioned as AWS WorkSpaces and AWS server assets, based on organizational needs
  • Manage AWS server assets with AWS Systems Manager
  • Run mature ITSM processes based on ServiceNow
  • Proactively monitor workspaces and servers for incidents using RLCatalyst Command Centre
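
Control Tower's Account Factory is normally driven from the console or Service Catalog, but the OU and account creation in the first step above can be illustrated with the underlying AWS Organizations APIs. A hedged sketch with placeholder IDs and email:

```python
import boto3

organizations = boto3.client("organizations")

# Create a custom OU under the organization root (placeholder root ID).
ou = organizations.create_organizational_unit(ParentId="r-examp", Name="Workloads")
print("OU:", ou["OrganizationalUnit"]["Id"])

# Account creation is asynchronous: start it, then poll the request status.
request = organizations.create_account(
    Email="cloud-team+workloads@example.com",
    AccountName="workloads-prod",
)
status = organizations.describe_create_account_status(
    CreateAccountRequestId=request["CreateAccountStatus"]["Id"]
)
print("State:", status["CreateAccountStatus"]["State"])
```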

Based on our internal experience adopting the full Plan-Build-Run lifecycle, it is evident that multiple AWS solutions, integrated with ServiceNow and automated with RLCatalyst products, provide a reusable blueprint for intelligent and automated cloud adoption. Answering the following quick questions can get your cloud adoption jumpstarted.


  • List your desktop and server assets to be migrated to the cloud, with the underlying OS, third-party software and applications
  • Design your AWS Landing Zone with security considerations separating public-facing and private-facing assets
  • Design your networking elements across your organization's business-unit segmentation of assets and the environments needed for development, testing and production
  • List your cloud cost segmentation and governance needs, based on which a multi-organization setup can be designed upfront and granular asset tags implemented (see the tagging sketch after this list)
  • Plan capacity and use Reserved Instances for cost optimization
  • Define user management and identity management needs, with possible integration to existing Microsoft AD infrastructure (on-cloud or on-premises) and Single Sign-On
  • Capture the IT department's needs for Self-Service Portals so the organization can order assets and services in a frictionless manner, with automated fulfilment using BOTs
  • Use Systems Manager, Runbook design and automation, and a Command Center to proactively monitor critical assets and applications and manage incidents efficiently
  • Provision and deprovision assets on demand with automated templates
  • Automate user onboarding and offboarding
  • ITSM service management with Change, a Configuration Management Database (CMDB), asset tracking and SecOps
  • Disaster Recovery strategy and internal readiness assessments
  • Cloud security, vulnerability testing, an ongoing patch management lifecycle and GRC
  • DevOps adoption for higher velocity of Continuous Integration and Continuous Delivery
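
For the cost segmentation and tagging item above, the Resource Groups Tagging API can apply and audit cost-allocation tags in bulk. A minimal sketch; the ARN and tag keys are examples, not a prescribed scheme:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Apply the cost-allocation tags agreed during planning (placeholder ARN).
tagging.tag_resources(
    ResourceARNList=[
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example",
    ],
    Tags={
        "CostCenter": "bu-retail",
        "Environment": "production",
        "Project": "migration-wave-1",
    },
)

# List resources that carry a CostCenter tag; diff against your asset
# inventory to find anything still untagged.
tagged = tagging.get_resources(TagFilters=[{"Key": "CostCenter"}])
for resource in tagged["ResourceTagMappingList"]:
    print(resource["ResourceARN"])
```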

For most organizations, moving to the cloud is a competency discovery process that lacks best practices and a maturity model. A better approach is to use a solid framework of technology, people and processes to make your cloud adoption frictionless. Relevance Lab, with its pre-built solutions in partnership with AWS and ServiceNow, can help enterprises adopt the cloud faster.

For more details, please feel free to reach out to marketing@relevancelab.com



2020 Blog, Blog, Digital Blog, Featured
Working with a large enterprise customer supporting B2B and B2C business, we leveraged Shopify to launch fully functional e-commerce stores, enabling new digital channels in a very short window. With the Covid-19 pandemic disrupting existing business and customer reach, large companies had to quickly realign their digital channels and supply chains to deal with disruption and changes in consumer behaviour. Businesses needed a frictionless approach to enable new digital channels, markets and products in a touchless manner, while rewiring their backend fulfilment systems to deal with supply chain disruptions. Relevance Lab worked closely with our customers during these challenging times to bring in the necessary changes, empowering e-commerce, enterprise integrations and supply chain insights to help create and maintain business continuity in a new environment.

The customer had invested in a full-fledged but heavyweight e-commerce platform that was slow and costly to change. With Shopify, we quickly enabled them to set up a fully functional e-commerce store in Canada, with standard integrations, region-specific context and positive revenue impact.

It all boiled down to identifying an e-commerce platform which:

  • Is easy and fast to set up
  • Is secure and scalable
  • Incurs the least total cost of ownership
  • Provides the convenience to shop on multiple devices
  • Is customizable as per requirements

We configured the Shopify built-in theme to meet branding requirements and purchase workflows. Payment was enabled through multiple channels, including credit card, PayPal and GPay. The store was also multilingual, supporting two languages, English and French. We were able to go live in just four weeks, providing complete functionality covering over 500 products, delivered on a very cost-optimized Shopify monthly subscription plan.

In parallel to building the storefront, the operations team enabled:

  • Adding new products to the online store
  • Configuring tax/discounts
  • Configuring customer support
  • Validating standard reports, such as sales reports

The merchant had complicated tax calculations (GST, PST, QST) across 13 regions, which were simplified by Shopify's out-of-the-box country-specific tax configuration.

Feature Configuration and Customization Details

  • Customization of the Shopify theme to make the store stand out and look great on web and mobile
  • Extended store functionality such as translation, user reviews, product quick view and product pre-order, using apps from the Shopify Marketplace
  • Shopify's own payment provider to accept credit card payments
  • Blog publishing through Shopify's native blog features, helping customers make informed decisions
  • Multiple languages enabled from the Shopify admin, with separate URLs created for translated content
  • The Shopify Fulfilment Network, a dedicated network of fulfilment centers ensuring timely deliveries, lower shipping costs and a positive customer experience
  • A shipping suite providing tools to calculate real-time shipping rates, purchase and print shipping labels, and track shipments
  • Shopify's built-in tax engine to automatically handle the most common sales tax calculations
  • Shopify's native Notifications Module to automatically send email or SMS to customers confirming their order and providing shipping updates
  • Shopify Email, configured with minimal effort to create and send email marketing campaigns from Shopify
  • Over 500 products imported in a matter of minutes using the product Import feature; more advanced features, including associating multiple product images and metadata, were available out of the box
  • Advanced store navigation configured using collections and tags, helping customers easily discover products of their choice
  • Shopify's analytics and reports to review the store's recent activity, get insight into visitors, analyze online store speed, and analyze the store's transactions

Solution Architecture
Key components of the Shopify platform are:

  • Partner Dashboard: provides API credentials, metrics for your published apps, development store creation, and access to resources that help you build your business
  • Shopify App CLI: bootstrap a working Shopify app with the Shopify command-line tool
  • Shopify App Generator for Rails: a Rails engine for building Shopify apps
  • App Bridge: a JavaScript library to embed your app seamlessly in the Shopify admin
  • Shopify Admin API Library for Ruby: a library to simplify making Admin API calls in Ruby apps
  • Shopify Admin API Library for Python: a library to simplify making Admin API calls in Python apps
  • Shopify Admin API GraphiQL explorer: an interactive tool to build GraphQL queries using real Shopify API resources
  • Shopify Storefront API GraphiQL explorer: an interactive tool to build GraphQL queries for Shopify's Storefront API
  • JavaScript Buy SDK: add Shopify features to any website
  • Android Buy SDK: add Shopify features to Android apps
  • iOS Buy SDK: add Shopify features to iOS apps
  • Polaris: create great user experiences for your apps with Shopify's design system and component library
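
As a small taste of these building blocks, the sketch below uses the Shopify Admin API library for Python listed above. The store domain, API version and access token are placeholders, and this only illustrates the style of calls involved.

```python
import shopify  # pip install ShopifyAPI

# Placeholder credentials -- use your store domain and private app token.
session = shopify.Session("your-store.myshopify.com", "2020-10", "shpat_example_token")
shopify.ShopifyResource.activate_session(session)

# Create a product -- the same call a bulk import would loop over.
product = shopify.Product()
product.title = "Example Widget"
product.save()

# Pull recent orders, e.g. to validate the standard sales reports.
orders = shopify.Order.find(status="any", limit=50)
print(len(orders), "orders fetched")

shopify.ShopifyResource.clear_session()
```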

Leveraging the standard Shopify components above, the solution was delivered with the following storefront architecture.



Relevance Lab Differentiator
Relevance Lab empowers digital solutions covering e-commerce, Content, CRM and E-Business. Within e-commerce platforms, we have deep specializations in Salesforce Commerce Cloud, Adobe Experience Manager and Shopify.

With complementary expertise in Cloud Infrastructure, Business Analytics and ERP Integration, we help our customers achieve the flexibility, scalability and cost optimization needed to adopt cloud platforms covering SaaS, PaaS and IaaS. Based on the context of the business challenge, we provide an end-to-end perspective, identifying areas of friction and leveraging technology to address them. In this case, there was a quick recovery from Covid-19-induced disruptions, and the solution was delivered at a fraction of regular costs with quick ROI. This collaborative approach of deeply understanding customer business problems, consulting on multiple solutions and bringing in deep expertise to enable the outcome is part of Relevance Lab's unique capability.


For more details on how we help achieve frictionless digital business and leverage cloud-based platforms like Shopify for e-commerce, feel free to contact marketing@relevancelab.com




2020 Blog, Blog, Cloud Blog, Featured

Amazon WorkSpaces is a simple-to-use, cloud-based, managed and secure desktop solution. It is a one-click deployment product, available on Windows and Linux operating systems. The main advantages of using Amazon WorkSpaces are as follows.

  • Easy-to-provision Desktop as a Service (DaaS)
  • Provisioning, de-provisioning and lifecycle management using your existing ITSM (ServiceNow, Jira Service Desk or Freshservice)
  • Extend your existing on-premises desktops/laptops with AWS WorkSpaces and manage them centrally
  • Secured data with a reliable, High Availability enabled desktop solution
  • Cost-effective, with on-demand flexibility
  • Scale up or down based on business need, managed centrally
  • Accelerated deployment at scale

Need for a secured and effective Cloud End User Computing model

Amazon WorkSpaces helps in adopting a secure, managed, cloud-based virtual desktop model to fulfil your End User Computing (EUC) requirements. It also moves organizations away from the pain of procuring, deploying and managing a complex environment. The traditional model has the further challenge that hardware and licenses can be scaled up at additional cost when needed, but cannot be scaled down, resulting in unwanted cost during seasonal spikes. Amazon WorkSpaces helps organizations scale up and down based on demand and deploy at scale with a few-click deployment model and enhanced security for your cloud desktop. Relevance Lab's pre-baked solution helps IT teams with minimal AWS knowledge adopt DaaS using ITSM platforms or a custom Cloud Portal.

Best Practices of Network Design for Amazon WorkSpaces


VPC: It is recommended to use a separate VPC for your WorkSpaces implementation. This helps define the required governance and security guardrails by creating traffic separation.

Directory Service: Each AWS Directory Service build requires a pair of subnets, for high availability across Availability Zones.

Subnet size: Subnet sizes are permanent and cannot be modified, so plan for future capacity (see the sketch after this table). You can define a default security group on your directory service, which then applies to all the WorkSpaces under it. Additionally, multiple directory services can use the same subnet.

Network Connectivity: Whether you are looking for a pure cloud solution for your AWS WorkSpaces or planning to integrate with your existing on-premises setup, AWS supports both through the following options.
  • Option 1: Extend your existing directory to the AWS Cloud.
  • Option 2: Utilize your existing on-premises Microsoft Active Directory with the AWS Directory Service AD Connector.
  • Option 3: Integrate your on-premises server with AD Connector to provide multi-factor authentication (MFA) to your WorkSpaces.
  • Option 4: Create a managed directory with AWS Directory Service (Microsoft AD or Simple AD) to manage your users and WorkSpaces.
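
Because subnet sizes cannot be changed later, it is worth scripting the network layout deliberately. A minimal boto3 sketch of the AZ pair of subnets the directory service needs; the VPC ID, CIDR blocks and Availability Zones are placeholders to adjust for your own capacity plan.

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"  # the dedicated WorkSpaces VPC (placeholder)

# A pair of generously sized subnets across two Availability Zones,
# as required by the AWS Directory Service deployment.
subnet_a = ec2.create_subnet(
    VpcId=VPC_ID, CidrBlock="10.0.0.0/20", AvailabilityZone="us-east-1a"
)
subnet_b = ec2.create_subnet(
    VpcId=VPC_ID, CidrBlock="10.0.16.0/20", AvailabilityZone="us-east-1b"
)
print(subnet_a["Subnet"]["SubnetId"], subnet_b["Subnet"]["SubnetId"])
```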

Observability of AWS WorkSpaces

This deals with managing the lifecycle from creation through usage to termination in an optimal manner, covering the following three areas.

1. Security and Governance
As per AWS best practices, every individual user account should be set up with AWS IAM roles carrying the right permissions, and multi-factor authentication (MFA) should be enabled on each account. Different WorkSpaces on the same physical host are isolated from each other through the hypervisor, as though they were on separate physical hosts.

2. Health Monitoring
CloudWatch Metrics for WorkSpaces gives insight into the overall health and connection status of all WorkSpaces, per desktop or aggregated for all WorkSpaces within a directory. Beyond the default metrics, additional metrics can also be enabled.

3. Cost Optimization
AWS WorkSpaces billing is based on usage, with two default options to choose from.

  • AlwaysOn: the best option when you are on monthly billing and your usage is typically around 6 to 9 hours a day.
  • AutoStop: the ideal option when you are on hourly billing. WorkSpaces stop after a specified period of inactivity, which stops the billing.

One of the best practices is to monitor the usage and running mode of WorkSpaces using the Amazon WorkSpaces Cost Optimizer. This solution uses an Amazon CloudWatch event to invoke an AWS Lambda function every 24 hours, which can convert your WorkSpaces to the most cost-effective model from the next billing cycle (hourly to monthly, or monthly to hourly) based on your usage pattern.
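
The conversion itself comes down to a single WorkSpaces API call. A sketch of what such a Lambda function might execute for a lightly used WorkSpace; the WorkSpace ID and idle timeout are examples, and the usage threshold is an assumption to verify against current AWS pricing.

```python
import boto3

workspaces = boto3.client("workspaces")

# Assumption: below roughly 80 connected hours a month, hourly (AUTO_STOP)
# billing is usually cheaper than monthly (ALWAYS_ON).
workspaces.modify_workspace_properties(
    WorkspaceId="ws-example12345",
    WorkspaceProperties={
        "RunningMode": "AUTO_STOP",
        "RunningModeAutoStopTimeoutInMinutes": 60,  # stop after 1 hour idle
    },
)
```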



Automation

WorkSpaces provisioning can be automated using your existing ITSM platform, such as ServiceNow, Jira Service Desk or Freshservice. Existing connectors, such as the AWS Service Management Connector and the RLCatalyst Service Management Connector, provide end-to-end automation.


AWS Products Used


Relevance Lab is a specialist AWS partner for Desktop as a Service using AWS WorkSpaces. We have implemented WorkSpaces for clients with pre-integrated, secure and mature solutions using their existing ITSM tools. This has helped customers adopt the cloud faster and advance their cost optimization journey. Relevance Lab's DaaS offering starts with an assessment questionnaire that helps your organization understand the need to migrate to a secure, scalable and mature solution. Based on the assessment scorecard, we recommend the right solution across automation, security, governance and compliance.

This blog covers the standard Desktop as a Service using AWS WorkSpaces. In more advanced scenarios, DaaS adoption also involves additional steps such as storage, log monitoring, security analytics (SIEM, SOAR), mail and office suite options, container deployment and application security signing, which will be covered in a separate blog.


For more details, or for the assessment questionnaire, please reach out to marketing@relevancelab.com




2020 Blog, Blog, Featured

Relevance Lab, in partnership with ServiceNow and AWS, has launched a new solution (a ServiceNow scoped application) to consume Intelligent Automation BOTs from within the ServiceNow self-service portal, with 1-click automation of assets and service requests under the Information Technology Service Management (ITSM) governance framework. This RLCatalyst BOTs Service Management (RLCatalyst BSM) connector is available for private preview and will soon also be available on the ServiceNow Marketplace. It integrates with the ServiceNow self-service portal and Service Catalog to dynamically publish an enterprise library of BOTs, achieving end-to-end automation across infrastructure, applications, service delivery and workflows. The solution builds on the concept of an “Automation Service Bus” architecture explained in an earlier blog.

The biggest benefit of this solution is the transition to a “touchless” automation model within the ServiceNow Self-Service Portal, with dynamic sync of enterprise automation libraries. New automation can be added without building custom forms or workflows inside ServiceNow. This makes the creation, publishing and lifecycle management of BOTs frictionless within the existing governance models of ITSM and Cloud, leading to faster rollout and ROI. Customers adopting this solution can significantly optimize ServiceNow and Cloud operations costs with self-service models. A typical large enterprise Service Desk receives a huge volume of inbound tickets daily, and more than 50% of these can be re-routed to self-service requests with proper design of the service catalog, automation and user training. With each ticket fulfilment (normally costing US $5-7) now handled by BOTs, there is a significant and measurable ROI, along with faster fulfilment, a better user experience and system-based compliance that helps in audits.

Following are the key highlights of this solution:

  • Rendering of RLCatalyst BOTs under the ServiceNow Service Catalog for 1-click ordering and automation, with built-in workflow approval models.
  • Ability for ServiceNow self-service users to order any automated Service Request from this standard catalog, covering common workflows like:
    • Password reset requests.
    • User onboarding.
    • User offboarding.
    • AD/SSO/IDAM integration.
    • Access and control for apps, tools and data.
    • G-Suite/O365/Exchange workflows.
    • Installation of new software.
    • Any standard service request made available by enterprise IT in a standard catalog.
  • Security and approvals integrated with existing ServiceNow and AD user profiles.
  • Ability to invoke any BOT from the RLCatalyst BOTs Server, which provides integration for agent-based, agentless, Lambda function, script, API-based and UI-based automation.
  • A pre-built library of 100+ BOTs provided as an out-of-the-box solution.

As a complementary solution to the AWS Service Management Connector, customers can achieve complete automation of their asset and service requests with secure governance. For assets consumed on non-AWS footprints like VMware, Azure and on-premises systems, the solution supports automation with Terraform templates to address hybrid-cloud platforms.

What are BOTs?
BOTs are any automation functionality dealing with common DevOps, TechOps, ServiceOps, SecurityOps and BusinessOps tasks. BOTs follow an Intelligent Automation maturity model, as explained in an earlier blog.

  • BOTs Intelligent Maturity Model
    • Task Automation.
    • Process Automation.
    • Decisioning-Driven Automation.
    • AI/ML-Based Automation.

BOTs vs Traditional Automation

  • BOTs are reusable, with separation of data and logic.
  • BOTs support multiple models (AWS Lambda functions, scripts, agent/agentless, UI BOTs, etc.) for better coverage.
  • BOTs are managed in a code repository with config management (Git repo), so changes are “managed” rather than unmanaged scripts.
  • BOTs are wrapped in YAML definitions and exposed as service APIs, allowing them to be invoked from third-party apps (like ServiceNow).
  • BOT runs are managed and supervised: the BOT Orchestrator manages the lifecycle to bring in security, compliance, error handling and insights.
  • BOTs have a lifecycle for intelligent maturity.
  • BOTs form an open-source platform that can be extended and integrated with existing tools on the journey to AIOps maturity.
  • BOTs are deeply embedded with ServiceNow, leveraging data and transaction integration bi-directionally.

The following image explains the RLCatalyst BOTs Service Management architecture.

How does RLCatalyst BOTs Service Management work?
Integrating your ServiceNow instance with the RLCatalyst BOTs Server lets you publish self-service automation to your ServiceNow Service Portal without custom coding or form design. Your users order items from the Service Catalog, which are then fulfilled by BOTs, while a record of each transaction is maintained in ServiceNow via Service Requests.

The ServiceNow administrator first downloads the scoped application and installs it in the ServiceNow instance. The application can be deployed from the GitHub repository provided by Relevance Lab; in the near future, it will also be available from the ServiceNow Application Store.

Once installed, the application is configured by the ServiceNow administrator, who fills in the “BOTs Server Configuration” form. The required parameters are the BOTs Server URL, Server Name, Is Default flag, Username and Password. This information is stored in the ServiceNow instance and is then used to discover and publish BOTs from the RLCatalyst BOTs Server.

The administrator then opens the Discover BOTs screen to retrieve the list of the latest BOTs available on the BOTs Server. Once the list is displayed, the administrator chooses the BOTs to publish and selects the kind of workflow to associate with each BOT (none, or single or multi-level approvals). Clicking the Publish button publishes the BOTs to the Service Portal, along with all the input forms associated with each BOT.

End users can then use the self-service catalog items to request fulfilment by BOTs.
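
The discover-then-publish flow can be pictured with a short script. The BOTs Server endpoint below is hypothetical and the catalog item fields are simplified; only the ServiceNow Table API path is standard.

```python
import requests

# Hypothetical BOTs Server API -- actual RLCatalyst paths may differ.
BOTS_SERVER = "https://bots.example.com"
SNOW = "https://yourinstance.service-now.com"

# Discover the latest BOTs available on the BOTs Server.
bots = requests.get(f"{BOTS_SERVER}/api/bots", auth=("svc_snow", "example")).json()

# Publish each chosen BOT as a catalog item via ServiceNow's Table API.
for bot in bots:
    requests.post(
        f"{SNOW}/api/now/table/sc_cat_item",
        auth=("admin", "example"),
        json={"name": bot["name"], "short_description": bot["description"]},
    )
```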

What is the standard library of RLCatalyst BOTs available with this solution?
RLCatalyst provides a library of 100+ BOTs for common Service Management tickets, which can achieve up to 30-50% automation with out-of-the-box functionality across multiple functional areas, as explained in the diagram below.

  • User Onboarding and Offboarding.
  • Cloud Management.
  • DevOps.
  • Notification Services.
  • Asset Management.
  • Software and Application Access Management.
  • Monitoring and Remediation.
  • Infrastructure Provisioning with integration to AWS Service Catalog.

Summary of Solution Benefits
The RLCatalyst BOTs Service Management connector provides an enterprise-wide automation solution integrating ServiceNow with hybrid cloud assets and enabling self-service models. The automation of asset and service requests delivers significant productivity gains for enterprises; in our own experience it has freed up 10 FTEs of productivity, automated 70% of inbound requests and saved more than US $500K annually across operations costs (including reduced headcount), ITSM license costs and optimized cloud asset usage with compliance, along with 50% efficiency gains on internal IT workflows.

Following are some key blogs with details of solutions addressed with this RLCatalyst BSM connector.


For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Feature Blog, Featured

Relevance Lab, in partnership with AWS, has launched a new solution to enable self-service collaboration for Scientific Computing using AWS Cloud resources. Scientific research drives innovations and discoveries across multiple fields to make human life better. There are large and complex programs funded by governments, the public sector and private organizations, and higher education institutions and universities globally have a specialized focus on research programs.

Some research institutions already use an existing ITSM portal for self-service; our previous blog explains the solution integrated with popular ITSM tools like ServiceNow (AWS Research Workbench). In this blog, we cover the common scenario of research institutions needing an open-source based custom self-service platform that integrates a community within the institution, and with outside organizations, in a federated manner.

Why do we need the RLCatalyst Research Gateway cloud solution?
Research is a specialized field, with the community focussed on using “Science” to find common solutions to human problems in areas such as health and medicine, space and earth. Driving frictionless research across geographies requires the ability to focus on the science while addressing the specific needs of People-Programs-Resources interactions. The RLCatalyst Research Gateway acts as a bridge, provisioning seamless and secure interactions and access to programs and budgets, with the ability to consume and manage the lifecycle of research-related computational and data resources.


PEOPLE: Specialized groups of researchers collaborating across organizations, disciplines and countries, with open collaboration needs.

PROGRAMS: Specialized research programs, funding, grants, governance, reporting, publishing outcomes, etc.

RESOURCES: High-performance computing resources, large data for studies, analytics models, data security and privacy considerations, sharing and collaboration, common federated identity and access management, etc.

The key requirements for the cloud-based RLCatalyst Research Gateway are the following.

  • Standard Research Needs
    • Roles, workflows, research tools, governance, access and security, integration.
    • People-Programs-Resources interactions.
    • Intramural and extramural research.
    • Infrastructure, applications, data and analytics.
  • Built on Cloud
    • Easy to deploy, consume, manage and extend; aligns with existing infrastructure, applications and cloud governance.
    • Leverages AWS research products.
  • Leverage open source with an enterprise support model
    • Supports both self-hosting and managed hosting options.
    • Cost-effective, with pre-built IP and packaged service offerings.

The diagram below explains the RLCatalyst Research Gateway cloud solution. It provides researchers with one-click access to collaborative computing environments operating across teams, research institutions and datasets, while enabling internal IT stakeholders to provide standard computing resources from a Service Catalog; manage, monitor and control spending; apply security best practices; and comply with corporate governance.

          Using the RLCatalyst Research Gateway cloud solution
The basic setup models a research institution or university that needs support for different research departments, principal investigators, researchers, project catalogs, and budgets. The diagram below explains a typical setup of the key stakeholders and the different entities inside the RLCatalyst Research Gateway (a small illustrative data-model sketch follows the list).

          • Research Organization/Institution.
          • Research Departments.
          • Principal Investigators.
          • Researchers.
          • Site Administrator.
          • Project Catalog of Cloud Products.
          • Budget for Project and Researcher.
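To make these relationships concrete, here is a minimal sketch of the entity hierarchy as plain Python data classes. The class and field names are our own hypothetical illustration, not the product's actual schema.

# Hypothetical sketch of the Research Gateway entity model; class and
# field names are illustrative, not the product's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Researcher:
    name: str
    email: str

@dataclass
class Project:
    name: str
    budget_usd: float                                        # project budget
    catalog_items: List[str] = field(default_factory=list)   # cloud products
    researchers: List[Researcher] = field(default_factory=list)

@dataclass
class PrincipalInvestigator:
    name: str
    projects: List[Project] = field(default_factory=list)

@dataclass
class Department:
    name: str
    principal_investigators: List[PrincipalInvestigator] = field(default_factory=list)

@dataclass
class ResearchInstitution:
    name: str
    departments: List[Department] = field(default_factory=list)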

          RLCatalyst Research Gateway solution map
Three key sets of role-based functionality are built into the RLCatalyst Research Gateway solution:

          • Researcher Workflows.
          • Principal Investigator Workflows.
          • Site Administrator Workflows.

          The RLCatalyst Research Gateway solution components
A number of AWS components have been used to build the RLCatalyst Research Gateway solution so that the research community can focus on science rather than the headaches of managing cloud infrastructure. At the same time, existing AWS investments of research institutions are leveraged and best practices are integrated without the need for custom or proprietary solutions. The following is a sample list of AWS products used in RLCatalyst Research Gateway; more products can be easily integrated, and a provisioning sketch follows the list.

• AWS Service Catalog – core products available for research consumption.
  • Amazon SageMaker notebooks.
  • Amazon EC2 instances.
  • Amazon S3 buckets.
  • Amazon WorkSpaces.
  • Amazon RDS data stores.
  • AWS HPC (high-performance computing).
  • Amazon EMR.
• Amazon Cognito for access control.
• AWS Control Tower for account management and governance.
• AWS Cost Explorer and billing for project and researcher budget tracking.
• Amazon SNS and Amazon EventBridge for notification services.
• AWS CloudFormation for template designs.
• AWS Lambda for serverless computing.
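As a hedged illustration of how such a portal can drive one-click provisioning, the sketch below launches a pre-approved AWS Service Catalog product with boto3. The product/artifact IDs and tag values are placeholder assumptions; a real portal would look these up per project and budget.

# Minimal sketch: one-click provisioning from AWS Service Catalog via boto3.
# IDs and tag values below are placeholders, not real identifiers.
import boto3

servicecatalog = boto3.client("servicecatalog", region_name="us-east-1")

response = servicecatalog.provision_product(
    ProductId="prod-xxxxxxxxxxxx",              # pre-approved catalog product
    ProvisioningArtifactId="pa-xxxxxxxxxxxx",   # version of the product
    ProvisionedProductName="sagemaker-nb-researcher1",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "ml.t3.medium"},
    ],
    Tags=[  # tags used downstream for per-project cost tracking
        {"Key": "Project", "Value": "genomics-study"},
        {"Key": "Researcher", "Value": "researcher1"},
    ],
)
print(response["RecordDetail"]["Status"])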

The RLCatalyst Research Gateway solution, created in partnership with AWS, is available in an open-source model with enterprise support options. The solution can be deployed in a self-hosted cloud or used in a managed hosting model, with customization options available as needed.

          For a demo video please click here

          For more details, please feel free to reach out to marketing@relevancelab.com




          2020 Blog, Blog, Cloud Blog, Featured

Based on AWS recommended best practices, this blog articulates governance and management at scale for customers implementing cloud security, covering the following themes:

          • Designing Governance at Scale
          • Governance Automation
          • Preventive Controls
          • Detective Controls
          • Bringing it all together

          Need for a matured and effective Cloud Security Governance
To achieve agility, compliance, and security, customers cannot rely on manual processes; hence automation plays a key role. This mandates an integrated model called “Governance at Scale”, which focuses on account management, security and compliance automation, and budget and cost management. This model helps customers move fast while ensuring their workloads meet security and compliance requirements. Governance at Scale is an orchestration framework that includes enablement, provisioning, and operations.


          • Account Management: Governance at Scale processes streamline account management across multiple AWS accounts and workloads in an organization through centralization, standardization and automation of account maintenance. This can be achieved through policy automation, identity federation and account automation.

• Security and Compliance Automation: Governance at Scale practice consists of three main goals.
  • Identity and Access Automation: Customers can access their workloads based on their role privileges, as defined by the organization's policies. Access to new services can be added at an OU level, and the changes will apply across all cloud accounts at that level.
  • Security Automation: To maintain a secure posture at scale, security tasks and compliance assessments also require automation. Automation reduces implementation effort, as templates ensure that services and projects are secure and compliant by default. Customers can also be more responsive when a policy violation occurs.
  • Policy Enforcement: AWS guidance on achieving Governance at Scale helps you enforce policies on AWS Regions, AWS services, and resource configurations. Policy enforcement happens at different levels (Region, service, and resource configuration) and can be applied at the organizational level or the resource level. Enforcement is based on roles, responsibilities, and compliance regulations (such as HIPAA, FedRAMP, and PCI DSS); a minimal Service Control Policy sketch appears after this list.

• Budget and Cost Management: This framework helps organizations proactively make decisions on budget controls and allocation across the organization, and primarily consists of budget planning and enforcement.
  • Budget Planning: This allows financial owners to allocate and subdivide the available budget from a given funding source appropriately across the company. Financial dashboards provide real-time insights to decision makers over the lifetime of the funding source.
  • Budget Enforcement: Budget enforcement can happen at each layer, department, or project in an organization, as these can have different budgetary needs and limits. The governance framework allows the organization to assign budgets and define thresholds while monitoring spending in real time, and can proactively notify the relevant stakeholders and trigger enforcement actions.
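To illustrate preventive policy enforcement, here is a hedged sketch that creates a Service Control Policy denying actions outside approved Regions, using boto3 against AWS Organizations. The Region list is an assumption, and exemptions for global services are omitted for brevity.

# Hedged sketch: a preventive SCP denying actions outside approved Regions.
# Exemptions for global services are omitted; OU attachment is a separate call.
import json
import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
            },
        }
    ],
}

organizations = boto3.client("organizations")
policy = organizations.create_policy(
    Content=json.dumps(scp),
    Description="Deny use of non-approved Regions",
    Name="region-guardrail",
    Type="SERVICE_CONTROL_POLICY",
)
# organizations.attach_policy(PolicyId=..., TargetId=<OU id>) would then
# apply it at the OU level.
print(policy["Policy"]["PolicySummary"]["Id"])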

Some of this intelligent automation includes:

• Restricting the use of AWS resources to those that cost less than a specified price.
• Throttling new resource provisioning.
• Shutting down, terminating, or deprovisioning AWS resources after archiving configurations and data for future use.
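As a hedged illustration of such an enforcement action, the sketch below is a Lambda handler that stops EC2 instances tagged to an over-budget project. Triggering it from an AWS Budgets alert (for example via Amazon SNS) and the "Project" tag convention are assumptions.

# Hedged sketch of a budget-enforcement action: a Lambda handler that stops
# EC2 instances tagged to an over-budget project. The event shape and the
# "Project" tag convention are assumptions.
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    project = event.get("project", "genomics-study")  # assumed event shape
    # Find running instances tagged to the over-budget project.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Project", "Values": [project]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}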

          Implementing Governance at Scale with Ideal Landing Zone architecture


          Key Process and Services to implement Governance at Scale Framework

AWS Control Tower: This is a native service used to set up and govern a secure, compliant, multi-account AWS environment, automated using AWS best-practice blueprints. Its multi-account structure enables aggregated, centralized logging, monitoring, and operations.

• Establish and Enable Guardrails: AWS Control Tower includes guardrails, which are high-level policies that provide ongoing governance. They allow you to adopt proven security best practices across the AWS environment managed by Control Tower.
• Automate Compliant Account Provisioning: Automate the account provisioning workflow using Account Factory.
• Centralize Identity and Access: By using AWS SSO, the service centralizes access and identity management following standard best practices.
• Log Archive Account: The log archive centralizes logs and provides a single source of truth for all account activities. The account works as a repository for API activity logs and resource configurations from all accounts in the landing zone, containing the centralized logging for AWS CloudTrail and AWS Config.
• Audit Account: The audit account is a restricted account designed to give security and compliance teams read and write access to all accounts in your landing zone. It can serve as the main account for security services such as Amazon GuardDuty and AWS Security Hub.

          Governance Lifecycle with Services: An integrated model covering AWS Config, AWS Systems Manager, Amazon GuardDuty and AWS Security Hub.

These services work together and play a crucial role in the Governance at Scale framework. Together, they allow customers to:

          • Define security rules and compliance requirements.
          • Monitor infrastructure against the rules and requirements.
          • Detect violations.
          • Get notifications in real time.
          • Take action in an effective and rapid manner.

AWS Config: This enables customers to assess, audit, and evaluate the configurations of their AWS resources in real time. It monitors and records AWS resource configurations and automates the evaluation of recorded configurations against desired configurations.
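For instance, a single AWS-managed rule can be enabled programmatically. The minimal sketch below turns on the managed rule that flags publicly readable S3 buckets, assuming the Config recorder and delivery channel are already set up in the account.

# Minimal sketch: enable an AWS-managed Config rule that flags S3 buckets
# allowing public reads. Assumes the Config recorder is already set up.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)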

AWS Systems Manager: This gives customers visibility through a unified user interface and allows them to control their infrastructure on AWS by automating operational tasks (see the Run Command sketch after this list). With AWS Systems Manager, customers can:

• Group resources by application.
• View operational data for monitoring and troubleshooting, and take action on groups of resources.
• Streamline resource and application management.
• Shorten the time to detect and resolve operational issues.
• Simplify operations and management of the infrastructure, securely at scale.
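As an example of automating an operational task, the hedged sketch below uses Systems Manager Run Command to execute a shell command across instances grouped by an application tag. The tag values and commands are placeholder assumptions, and the instances must be SSM-managed.

# Hedged sketch: run an operational check across instances grouped by an
# application tag with Systems Manager Run Command. Tag values are
# placeholders; instances must be SSM-managed for this to work.
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    Targets=[{"Key": "tag:Application", "Values": ["billing-service"]}],
    DocumentName="AWS-RunShellScript",      # AWS-managed command document
    Parameters={"commands": ["uname -a", "df -h"]},
    Comment="Routine operational check via Run Command",
)
print(response["Command"]["CommandId"])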

Amazon GuardDuty: This protects AWS accounts, workloads, and data with intelligent threat detection and monitoring of malicious activity and unauthorized behavior. It uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
Customers enable GuardDuty from the AWS Management Console, where it analyzes billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. By integrating with Amazon CloudWatch Events, GuardDuty alerts become actionable.
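Enabling the service can also be done in a single API call; here is a minimal sketch with boto3 (the publishing frequency is our choice, not a requirement).

# Minimal sketch: enable a GuardDuty detector in the current account/Region.
# Once enabled, findings flow to Amazon CloudWatch Events, where a rule can
# route them to SNS, Lambda, or a SIEM.
import boto3

guardduty = boto3.client("guardduty")

detector = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print(detector["DetectorId"])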

AWS Security Hub: This is the compliance and security center for AWS customers. Security Hub allows customers to centrally view and manage security alerts and automate security checks.
Security Hub automatically runs account-level configuration and security checks based on AWS best practices and open standards. It consolidates security findings across accounts and provider products and displays results on the Security Hub console. It also supports integration with Amazon CloudWatch Events; to automate remediation of specific findings, customers can define custom actions to take when a finding is received.
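The hedged sketch below enables Security Hub with its default standards and registers a custom action that could be wired to a remediation workflow via a CloudWatch Events rule. The action name and ID are our assumptions.

# Hedged sketch: enable Security Hub with default standards and register a
# custom action. The action name/ID are assumptions; routing the action to a
# remediation Lambda happens in a separate CloudWatch Events rule.
import boto3

securityhub = boto3.client("securityhub")

securityhub.enable_security_hub(EnableDefaultStandards=True)

action = securityhub.create_action_target(
    Name="SendToRemediation",
    Description="Route selected findings to a remediation workflow",
    Id="SendToRemediation",
)
print(action["ActionTargetArn"])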


          AWS Products Used


With AWS management and governance services, customers can improve their governance control and fast-track their business objectives. However, solving these challenges is not straightforward, as many customers rely on traditional IT management processes that are manual and not scalable. Also, with a lack of clarity on account management and without clearly defined processes, they end up provisioning multiple accounts, and tracking becomes inefficient. This can also increase their security and financial risks. In some cases, due to these challenges, customers rely on third-party tools or solutions, which can further complicate and increase operational challenges.

Relevance Lab can help organizations build or migrate existing accounts to a secure, compliant, multi-account AWS environment enabled with automation to increase both operational and cost efficiency. The transition to this matured Governance at Scale framework can be implemented in four weeks using our specialized competencies, the RLCatalyst automation framework, and the Governance at Scale handbook.

          For more details, please feel free to reach out to marketing@relevancelab.com




          2020 Blog, Analytics, Blog, Featured

Nobody likes remembering credentials; they put plenty of pressure on memory. What is worse, many people use the same username and password no matter the application. Single Sign-On (SSO) is a method of authentication that permits websites to use other trusted sites to verify users, allowing a user to log into any independent application with one ID and password. Verifying user identity is very important when it comes to knowing which permissions a user will have. OKTA is a leading IDAM application, used by our client for managing access, that blends user identity management with SSO. SPECTRA, an analytics platform built on open-source technology, has recently been onboarded for this client in the publishing space. The client has integrated all their applications under the single roof of IDAM (OKTA), and SPECTRA follows the same route.

          What is SPECTRA?
SPECTRA is a Big Data analytics platform from Relevance Lab with the ability to consume, store, and process structured and unstructured data. It can also cleanse and integrate this data into one unique platform, depicting data intelligently and presenting it through an intuitive visualization layer so that business users get actionable business insights across various parameters. Coupled with an OCR engine, it also provides Google-like search capabilities across legacy unstructured and structured data.


          SAML
In the modern era of computing, security is an essential feature of enterprise applications. Security Assertion Markup Language (SAML) provides a single point of authentication at a secure identity provider, which means user credentials never need to leave the firewall boundary. SAML is then used to assert that identity to others.

SAML SSO works by transferring the user's identity from one place (OKTA) to another service provider (SPECTRA). The application identifies the user's origin (by first name, last name, and network email ID) and redirects the user back to the identity provider (OKTA), asking the user to authenticate with their IdP-registered credentials.

          See the high level architectural diagram below.


          Integrating with OKTA Idam Platform using SAML
An Identity Provider (IdP) is an entity that provides identities, including the ability to authenticate a user-agent. The Identity Provider also holds additional user profile information such as first name, last name, job code, address, and so on. Some service providers may require only a simple user profile, while others may require a richer set of user data (job code, department, address, location, manager, etc.).

See the diagram below, which shows the SPECTRA and SAML integration.


A SAML Request, also referred to as an authentication request, is generated by SPECTRA (the Service Provider) to request authentication of the user-agent through the IdP. The SAML Response is generated by the Identity Provider and contains the assertion of the authenticated user. Additionally, a SAML Response may carry further information, such as user profile details and group/role information, depending on what the Service Provider can support.

See the picture below, which shows the SAML integration flow.


SPECTRA platform-initiated sign-in describes the SAML sign-in flow when initiated by the Service Provider. This is triggered when the end user tries to access a resource or log in directly on the Service Provider side, such as when the user-agent (browser) tries to access a protected resource on the Service Provider side.

Identity Provider (IdP)-initiated sign-in depicts the SAML sign-in flow created by the Identity Provider. The IdP initiates a SAML Response that is redirected to the Service Provider to confirm the user's identity, rather than the SAML flow being triggered by a redirection from SPECTRA. The Service Provider never directly interacts with the Identity Provider; the user-agent (browser) functions as the agent that carries out all the redirections. The Service Provider must know which IdP to use (SPECTRA resolves this from the MySQL database) and must not treat the user as authenticated until the SAML assertion comes back from the IdP.

An Identity Provider can initiate an authentication flow. The SAML authentication flow is asynchronous: the Service Provider redirects the request to the IdP and does not maintain any state for outstanding authentication requests, so the response the Service Provider gets from the Identity Provider must contain all the required information. SPECTRA validates the OKTA user information in the MySQL DB and maps the assigned user roles into the application, where the user can view them. A minimal sketch of this response-handling step follows.
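The sketch below shows what that step can look like, assuming the Service Provider is a Flask app using the open-source python3-saml library with its settings (including the OKTA IdP metadata) under ./saml. The lookup_roles() helper is a hypothetical stand-in for the MySQL role lookup; none of this is SPECTRA's actual code.

# Hedged sketch of SP-side SAML Response handling with Flask + python3-saml.
# Settings (SP/IdP metadata) are assumed to live under ./saml; lookup_roles()
# is a hypothetical placeholder for the MySQL role lookup.
from flask import Flask, request, redirect, session
from onelogin.saml2.auth import OneLogin_Saml2_Auth

app = Flask(__name__)
app.secret_key = "change-me"

def prepare_flask_request(r):
    # Translate Flask's request into the dict python3-saml expects.
    return {
        "https": "on" if r.scheme == "https" else "off",
        "http_host": r.host,
        "script_name": r.path,
        "get_data": r.args.copy(),
        "post_data": r.form.copy(),
    }

@app.route("/saml/acs", methods=["POST"])
def acs():
    # Assertion Consumer Service: OKTA posts the SAML Response here.
    auth = OneLogin_Saml2_Auth(prepare_flask_request(request), custom_base_path="./saml")
    auth.process_response()          # validate signature and conditions
    if auth.get_errors() or not auth.is_authenticated():
        return "SAML authentication failed", 401
    email = auth.get_nameid()
    attrs = auth.get_attributes()    # first name, last name, email, ...
    session["user"] = email
    session["roles"] = lookup_roles(email)  # hypothetical MySQL lookup
    return redirect("/")

def lookup_roles(email):
    # Hypothetical: validate the OKTA user in MySQL and return the roles
    # assigned to that user in the application.
    return ["viewer"]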

SPECTRA, a product from Relevance Lab, offers great flexibility as an analytics platform with the ability to consume, store, and process structured and unstructured data. It can be integrated with various identity and access management platforms like OneLogin, Auth0, Ping Identity, etc., using SAML.

          For more details, please feel free to reach out to marketing@relevancelab.com



