
Governance360 is an integrated and automated solution built on the AWS Control Tower Customization methodology. The solution focuses on the entire lifecycle of a customer's cloud adoption, covering the following stages:


  • Workload planning for cloud migration, with associated best practices and automation.
  • Multi-account management with secure and compliant AWS accounts, cost tracking against budgets, and guardrails to ensure workloads are deployed per AWS Well-Architected best practices. This component, called “Control Services”, provides preventive and corrective guardrails.
  • Security Management, which keeps workloads (network, IDAM, compute, data, storage, and applications) secure and monitored for static and dynamic threats and vulnerabilities. This ensures proactive detection and correction of security threats.
  • Proactive monitoring, which enables observability across systems, applications, and logs, with integrated alert aggregation, correlation, and diagnostics to detect performance and availability issues.
  • Service Management and Asset Management, which integrate cloud management workflows with ITSM tools based on enterprise standards and enable self-service portals and active CMDB tracking.
  • An Automation-First foundation of workflows, templates, and BOTs, which provides a scalable, enterprise-grade framework for better, faster, cheaper cloud adoption and ongoing cloud managed services, leveraging RLCatalyst BOTs Server.

All the above components are complex systems that need integration and data sharing, with active policies, status monitoring, and workflows for suitable interventions, to achieve a holistic Governance360 model. The solution ensures that proper policies and governance models are set up upfront and updated consistently as lifecycle changes occur. It combines AWS Control Tower, other highly available and trusted AWS services, and Relevance Lab automated solutions to help customers quickly set up a secure, multi-account AWS environment using AWS best practices. Through customization, the solution integrates with AWS Control Tower lifecycle events to keep resource deployments in sync with the landing zone. In a single pane, you get visibility into the organizational tree structure of your AWS accounts along with compliance status and non-compliance findings.
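The lifecycle-event integration mentioned above can be sketched as an Amazon EventBridge event pattern. The snippet below is a minimal illustration, assuming the commonly used Control Tower lifecycle event names (CreateManagedAccount, UpdateManagedAccount, UpdateLandingZone); the local matcher is only a toy stand-in for EventBridge's own matching engine.

```python
# Sketch of an EventBridge event pattern that matches AWS Control Tower
# lifecycle events (emitted through CloudTrail), so that customization
# pipelines can re-sync resources when the landing zone changes.
# The event names below are illustrative of commonly used lifecycle events.
lifecycle_event_pattern = {
    "source": ["aws.controltower"],
    "detail-type": ["AWS Service Event via CloudTrail"],
    "detail": {
        "eventName": [
            "CreateManagedAccount",   # new account enrolled
            "UpdateManagedAccount",   # existing account updated
            "UpdateLandingZone",      # landing zone settings changed
        ]
    },
}

def matches(pattern: dict, event: dict) -> bool:
    """Tiny local matcher for the subset of EventBridge pattern
    syntax used above (exact-value lists and nested dicts)."""
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            if not isinstance(value, dict) or not matches(allowed, value):
                return False
        else:
            if value not in allowed:
                return False
    return True

sample_event = {
    "source": "aws.controltower",
    "detail-type": "AWS Service Event via CloudTrail",
    "detail": {"eventName": "CreateManagedAccount"},
}
print(matches(lifecycle_event_pattern, sample_event))  # True
```

In a real deployment the pattern would be attached to an EventBridge rule whose target kicks off the customization pipeline.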

The diagram below explains the core building blocks of the Governance360 Solution.


Why do Enterprises need Governance360?
For most enterprises, the major challenges are governance, compliance, and a lack of visibility into their cloud infrastructure. They spend enormous time and effort trying to achieve security and compliance in silos. This can be addressed by automating compliance monitoring and increasing visibility across the cloud with the right set of tools and solutions. Our solution automates exactly these security and compliance needs: through a combination of automated preventive, detective, and responsive controls, we help enterprises enforce near-continuous compliance with auto-remediation, thereby increasing overall security and reducing compliance cost.

Some of the use cases on why Enterprises would adopt Governance360:

  • Centralized Cloud Operations Management
  • Configuration, Compliance and Audit Management
  • Automated proactive monitoring and Observability of your Applications
  • Self-Service Provision and Deprovision of Cloud resources
  • Cloud Financial Management

As shown in the above diagram, Governance360 uses a set of tools and policies across multiple layers. The solution starts with a deployment of AWS Control Tower, after which you deploy an AWS CloudFormation template in the account where the AWS Control Tower landing zone is set up. The template launches an AWS CodePipeline pipeline, AWS CodeBuild projects, AWS Step Functions state machines, AWS Lambda functions, an Amazon EventBridge event rule, an Amazon Simple Queue Service (Amazon SQS) queue, and an Amazon Simple Storage Service (Amazon S3) bucket that contains a sample configuration package. The solution can also create an AWS CodeCommit repository to hold the configuration package instead of the Amazon S3 bucket.

Once the solution is deployed, the custom resources are packaged and uploaded to the CodePipeline source via Amazon S3, which triggers the service control policies (SCPs) state machine and the AWS CloudFormation StackSets state machine to deploy SCPs at the organizational unit (OU) level or stack instances at the OU and/or account level. Integration with AWS Security Hub ensures that all of your accounts and resources are continuously monitored for compliance.
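To make the SCP deployment concrete, here is a sketch of the kind of policy document the pipeline pushes to an OU. The statement mirrors a well-known preventive guardrail (denying member accounts the ability to leave the organization); treat it as an illustration, not the exact policy set the solution ships.

```python
import json

# Illustrative service control policy (SCP) of the kind deployed at the
# OU level: a preventive guardrail stopping member accounts from
# leaving the organization.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeaveOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

scp_json = json.dumps(scp_document, indent=2)
print(scp_json)
```

With credentials in place, such a document could be registered with `aws organizations create-policy --type SERVICE_CONTROL_POLICY` and then attached to an OU with `aws organizations attach-policy`.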


Our standard and custom library includes a set of pre-built templates (CloudFormation and Terraform) and policies (YAML/JSON). These can be combined as templates for deployment and provisioning plus policies to enforce and monitor governance and compliance. This enables one-click automated deployment of your network, infrastructure, and application layers, and enforces pre-defined compliance on your accounts.
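As a small illustration of the policy side, the sketch below shows the evaluation logic a tag-compliance guardrail might apply, similar in spirit to the AWS Config managed rule `required-tags`; the tag keys are hypothetical examples.

```python
# Minimal sketch of the evaluation logic behind a "required-tags"
# guardrail. A resource is compliant only if it carries every
# required tag key. The tag keys are hypothetical.
REQUIRED_TAGS = {"CostCenter", "Environment", "Owner"}

def evaluate_required_tags(resource_tags: dict) -> str:
    """Return a Config-style compliance verdict for one resource."""
    missing = REQUIRED_TAGS - set(resource_tags)
    return "COMPLIANT" if not missing else "NON_COMPLIANT"

print(evaluate_required_tags(
    {"CostCenter": "123", "Environment": "prod", "Owner": "ops"}))  # COMPLIANT
print(evaluate_required_tags({"Owner": "ops"}))  # NON_COMPLIANT
```

In practice this verdict would be reported back to AWS Config, which aggregates it into the compliance dashboards described later.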

Governance360 Maturity Model
The Governance360 maturity model consists of four levels, as shown below:


Level-1 (Basic Governance)
  • Covers AWS Control Tower
  • Takes about 4-6 weeks

  What is AWS Control Tower?
  • A service to set up and govern secure, compliant, multi-account AWS environments based on AWS best practices.

  How does it work?
  • Step-1: Sets up the multi-account structure, identity and access management, and account provisioning workflows.
  • Step-2: Applies guardrails (security and compliance policies) that prevent non-compliance during new deployments and detect and remediate non-compliance found on accounts and resources.
  • Step-3: Monitors compliance with visual summaries, providing a dashboard for accounts, guardrails, and compliance status all in one place.

  What benefits does it provide?
  • Automated and standardized account provisioning.
  • Better control of AWS environments.
  • Easier workload governance, freeing teams to drive innovation.
  • Cost and budget management.

  What is still missing in maturity at this level?
  • Setup is largely manual: pushing changes to all the different OUs and accounts is not automated, and deploying new policies and customizations is not easy.
  • Setup of VPC/subnet/IAM roles needs more advanced templates and automation.
  • Only mandatory guardrails are activated; more work is needed to achieve full AWS Foundational and CIS Top-20 benchmark compliance.
  • Cost optimization is missing.
  • Integration with ITSM tools is missing.
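The multi-account structure that Level-1 establishes can be pictured as a small tree: a management account plus the Security OU (with its Log Archive and Audit accounts) and OUs for workloads. The sketch below renders such a structure; the workload OU and account names are made-up examples.

```python
# Sketch of the baseline multi-account structure AWS Control Tower
# sets up. The Security OU with Log Archive and Audit accounts is
# standard; the Workloads OU and its accounts are hypothetical.
org_tree = {
    "Root": {
        "Management account": {},
        "Security OU": {"Log Archive account": {}, "Audit account": {}},
        "Workloads OU": {"Dev account": {}, "Prod account": {}},
    }
}

def render(tree, depth=0):
    """Return indented text lines for a nested dict of OUs/accounts."""
    lines = []
    for name, children in tree.items():
        lines.append("  " * depth + name)
        lines.extend(render(children, depth + 1))
    return lines

print("\n".join(render(org_tree)))
```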


Level-2 (Advanced Governance)
  • Automation-led Governance@Scale
  • Covers AWS Service Management Connector and ITSM integrations
  • Additional 6-8 weeks

  What is Governance@Scale?
  • Customizations for AWS Control Tower using CI/CD pipeline best practices.
  • A rich library of automation templates for infrastructure automation.
  • Extended compliance covering AWS Foundational and CIS Top-20 benchmarks.
  • Cost optimization techniques: Instance Scheduler, Compute Optimizer, Amazon WorkSpaces Cost Optimizer, and cost-monitoring Lambda functions.
  • Activation of AWS Service Catalog and AWS Service Management Connector.

  How does it work?
  • Deployment of Customizations for Control Tower and custom guardrails.
  • Enablement of AWS Security Hub and AWS Config.
  • Service Catalog and service management capabilities using your ITSM platform (ServiceNow, Jira Service Desk, Freshservice).

  What benefits does it provide?
  • Ease of deployment of security controls at scale using a CI/CD pipeline.
  • Security Hub dashboard.
  • Asset management dashboard.
  • AWS Config aggregator dashboard.

  What is still missing in maturity at this level?
  • No integration with security monitoring of resources and accounts, static or dynamic.
  • Proactive monitoring of asset health is missing.
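The cost-optimization tooling mentioned at this level (e.g., Instance Scheduler) boils down to a simple decision rule: keep non-production instances running only inside a working-hours window. Here is a minimal local sketch of that logic, with a hypothetical Mon-Fri 08:00-20:00 schedule.

```python
# Sketch of the decision logic behind an instance scheduler used for
# cost optimization. The schedule (Mon-Fri, 08:00-20:00) is a
# hypothetical example; real schedulers read schedules from config.
SCHEDULE = {"start_hour": 8, "stop_hour": 20, "days": {0, 1, 2, 3, 4}}

def desired_state(weekday: int, hour: int, schedule: dict = SCHEDULE) -> str:
    """Return 'running' or 'stopped' for a given weekday (0=Mon) and hour."""
    in_window = (weekday in schedule["days"]
                 and schedule["start_hour"] <= hour < schedule["stop_hour"])
    return "running" if in_window else "stopped"

print(desired_state(weekday=2, hour=10))  # running (Wednesday morning)
print(desired_state(weekday=5, hour=10))  # stopped (Saturday)
```

A scheduled Lambda function would apply this verdict by starting or stopping the tagged instances accordingly.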


Level-3 (Proactive and Preventive Governance)
  • Covers AWS Security Hub and AWS monitoring tools integration
  • Provides proactive, integrated monitoring of real-time security and health parameters for appropriate early-warning systems and actions, helping with early detection of adverse events, diagnosis, and action
  • Additional 8-10 weeks

  What is Proactive and Preventive Governance?
  • Use the ITSM/custom cloud portal to view compliance status across your multi-account cloud infrastructure.
  • Get a single-pane-of-glass view of your multi-account cloud assets.
  • Enable AWS Systems Manager (SSM) to run periodic vulnerability assessments on your resources.

  How does it work?
  • Integration of AWS Security Hub with AWS Control Tower.
  • Use of Amazon GuardDuty and Amazon Inspector.
  • Enablement of Amazon CloudWatch.

  What benefits does it provide?
  • Security Hub dashboard.
  • Proactive health monitoring dashboard.
  • Vulnerability and missing-patches dashboard.

  What is still missing in maturity at this level?
  • Granular policies for account- and resource-level control are missing.
  • Continuous compliance and remediation is missing.
  • Vulnerability and patch management fixes are missing.
  • Industry-specific extensions for specialized compliances: HITRUST, HIPAA, GRC, GDPR, etc.
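The Security Hub integration at this level is essentially a triage step over findings. The sketch below filters a couple of hand-written findings shaped like the AWS Security Finding Format (ASFF) fields it uses (Severity.Label, Compliance.Status, RecordState); a real deployment would read findings from the Security Hub API instead of a local list.

```python
# Local sketch of Security Hub finding triage: keep only active,
# failed, high-severity findings for alerting. The finding dicts
# mimic the ASFF fields referenced below.
findings = [
    {"Title": "S3 bucket public", "Severity": {"Label": "HIGH"},
     "Compliance": {"Status": "FAILED"}, "RecordState": "ACTIVE"},
    {"Title": "Root MFA enabled", "Severity": {"Label": "LOW"},
     "Compliance": {"Status": "PASSED"}, "RecordState": "ACTIVE"},
]

def needs_alert(finding: dict) -> bool:
    """True when a finding is active, failed, and high/critical severity."""
    return (finding["RecordState"] == "ACTIVE"
            and finding["Compliance"]["Status"] == "FAILED"
            and finding["Severity"]["Label"] in {"HIGH", "CRITICAL"})

alerts = [f["Title"] for f in findings if needs_alert(f)]
print(alerts)  # ['S3 bucket public']
```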


Level-4 (Intelligent Compliance with Remediations)
  • Covers Cloud Custodian and intelligent automation with BOTs and policies
  • Helps achieve continuous compliance
  • Helps achieve industry-specific security standards (depending on the type of compliance)
  • Typically 4-6 weeks per compliance standard

  What is Intelligent and Continuous Compliance with industry-specific coverage?
  • Continuous monitoring, detection, and auto-remediation achieved at scale.
  • Ability to learn from previous incidents and increase coverage and compliance.
  • Enterprise-grade automation covering the full lifecycle of cloud resources, system changes, and people interactions.
  • Baselining of requirements for industry-specific compliance needs such as HITRUST, HIPAA, GDPR, and SOC2.
  • Deployment of Quick Starts for these specific standards.

  How does it work?
  • Integration with RLCatalyst BOTs Server and Command Centre.
  • Application- and business-service-level monitoring and diagnosis.
  • Integration with Cloud Custodian.
  • Launch of compliance-standard-specific Quick Starts.
  • Enablement of AWS Systems Manager (or ManageEngine) and patch management.

  What benefits does it provide?
  • Continuous compliance dashboard (Cloud Custodian + Security Hub).
  • Vulnerability and compliance status dashboard.
  • Command Centre dashboards.
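Continuous compliance with Cloud Custodian is driven by declarative policies. Below is an illustrative policy of that shape, stopping EC2 instances that are missing a required tag; Custodian accepts policies in YAML or JSON, and the policy name and tag key here are hypothetical.

```python
import json

# Illustrative Cloud Custodian policy for continuous compliance with
# auto-remediation: stop EC2 instances missing an Owner tag. The
# policy name and tag key are hypothetical examples.
custodian_policy = {
    "policies": [
        {
            "name": "ec2-missing-owner-tag",
            "resource": "ec2",
            "filters": [{"tag:Owner": "absent"}],
            "actions": ["stop"],
        }
    ]
}

print(json.dumps(custodian_policy, indent=2))
```

Run on a schedule (for example from a Lambda function), such a policy detects drift and remediates it without human intervention, which is the essence of this maturity level.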

How to get started
Relevance Lab is an AWS consulting partner and helps organizations achieve automation-led cloud management with Governance360, based on AWS best practices. While enterprises can try to build some of these solutions themselves, it is a time-consuming and error-prone activity that calls for a specialist partner. Relevance Lab has helped 10+ enterprises with this need and has a reusable automated solution and pre-built library to meet security and compliance requirements.

For more details, please feel free to reach out to marketing@relevancelab.com.

References
Reference Architecture for HITRUST on AWS
Customizations for AWS Control Tower
AWS Control Tower and Cloud Custodian
Deploy and Govern at Scale with AWS Control Tower
Relevance Lab solution for Compliance as a Code





Major advances are happening in healthcare informatics, including sub-disciplines like bioinformatics and clinical informatics, through the leverage of cloud technologies and large open data sets; these are being rapidly adopted by life sciences and healthcare institutions in the commercial and public sectors. This domain has deep investments in scientific research and data analytics, focusing on information and computation needs and data acquisition techniques to optimize the acquisition, storage, retrieval, obfuscation, and secure use of information in health and biomedicine for evidence-based medicine and disease management.

In recent years, genomics and genetic data have emerged as an innovative area of research that could potentially transform healthcare. The emerging trends are for personalized medicine, or precision medicine, leveraging genomics. Early diagnosis of a disease can significantly increase the chances of successful treatment, and genomics can detect disease long before symptoms present themselves. Many diseases, including cancers, are caused by alterations in our genes. Genomics can identify these alterations and search for them using an ever-growing number of genetic tests.

With AWS, genomics customers can dedicate more time and resources to science, speeding time to insights, achieving breakthrough research faster, and bringing life-saving products to market. AWS enables customers to innovate by making genomics data more accessible and useful. AWS delivers the breadth and depth of services to reduce the time between sequencing and interpretation, with secure and frictionless collaboration capabilities across multi-modal datasets. Plus, you can choose the right tool for the job to get the best cost and performance at a global scale, accelerating the modern study of genomics.

Relevance Lab Research@Scale Architecture Blueprint
Working closely with AWS Healthcare and Clinical Informatics teams, Relevance Lab is bringing a scalable, secure, and compliant solution for enterprises to pursue Research@Scale on the cloud for intramural and extramural needs. The diagram below shows the architecture blueprint for Research@Scale. The solution offered on the AWS platform covers technology, solutions, and integrated services to help large enterprises manage research across global locations.


Leveraging AWS Biotech Blueprint with our RLCatalyst Research Gateway
This use case builds on the AWS Biotech Blueprint, which provides a core template for deploying a preclinical, cloud-based research infrastructure and optional informatics software on AWS.

This Quick Start sets up the following:

• A highly available architecture that spans two Availability Zones
• A preclinical virtual private cloud (VPC), configured with public and private subnets according to AWS best practices, to provide you with your own virtual network on AWS; this is where informatics and research applications run
• A management VPC configured with public and private subnets to support the future addition of IT-centric workloads such as Active Directory, security appliances, and virtual desktop interfaces
• Redundant, managed NAT gateways to allow outbound internet access for resources in the private subnets
• Certificate-based virtual private network (VPN) services through the use of AWS Client VPN endpoints
• Private, split-horizon Domain Name System (DNS) with Amazon Route 53
• Best-practice AWS Identity and Access Management (IAM) groups and policies based on separation of duties, designed to follow U.S. National Institute of Standards and Technology (NIST) guidelines
• A set of automated checks and alerts to notify you when AWS Config detects insecure configurations
• Account-level logging, audit, and storage mechanisms designed to follow NIST guidelines
• A secure way to remotely join the preclinical VPC network by using the AWS Client VPN endpoint
• A prepopulated set of AWS Systems Manager Parameter Store key/value pairs for common resource IDs
• (Optional) An AWS Service Catalog portfolio of common informatics software that can be easily deployed into your preclinical VPC

Using the Quick Start templates, the products were added to AWS Service Catalog and imported into RLCatalyst Research Gateway.



Using the standard products, the Nextflow workflow orchestration engine was launched for genomics pipeline analysis. Nextflow creates and orchestrates analysis workflows, using AWS Batch to run the workflow processes.

Nextflow is an open-source workflow framework and domain-specific language (DSL) for Linux, developed by the Comparative Bioinformatics group at the Barcelona Centre for Genomic Regulation (CRG). The tool enables you to create complex, data-intensive workflow pipeline scripts and simplifies the implementation and deployment of genomics analysis workflows in the cloud.

This Quick Start sets up the following environment in a preclinical VPC:

• In the public subnet, an optional Jupyter notebook in Amazon SageMaker integrated with an AWS Batch environment.
• In the private application subnets, an AWS Batch compute environment for managing Nextflow job definitions and queues and for running Nextflow jobs. AWS Batch containers have Nextflow installed and configured in an Auto Scaling group.
• Because no databases are required for Nextflow, this Quick Start does not deploy anything into the private database (DB) subnets created by the Biotech Blueprint core Quick Start.
• An Amazon Simple Storage Service (Amazon S3) bucket to store your Nextflow workflow scripts, input and output files, and working directory.

RStudio for Scientific Research
RStudio is a popular IDE, licensed either commercially or under AGPLv3, for working with R. RStudio is available in a desktop version, or a server version that allows you to access R via a web browser.

After you have analyzed your results, you may want to visualize them. Shiny is a great R package, licensed either commercially or under AGPLv3, that you can use to create interactive dashboards. Shiny provides a web application framework for R. It turns your analyses into interactive web applications; no HTML, CSS, or JavaScript knowledge is required. Shiny Server can deliver your R visualization to your customers via a web browser and execute R functions, including database queries, in the background.

RStudio is provided as a standard catalog item in RLCatalyst Research Gateway for 1-Click deployment and use. AWS provides a number of tools, like Amazon Athena and AWS Glue, to connect to datasets for research analysis.

Benefits of using AWS for Clinical Informatics

Data transfer and storage
The volume of genomics data poses challenges for transferring it from sequencers in a quick and controlled fashion, then finding storage resources that can accommodate the scale and performance at a price that is not cost-prohibitive. AWS enables researchers to manage large-scale data that has outpaced the capacity of on-premises infrastructure. By transferring data to the AWS Cloud, organizations can take advantage of high-throughput data ingestion, cost-effective storage options, secure access, and efficient searching to propel genomics research forward.

Workflow automation for secondary analysis
Genomics organizations can struggle with tracking the origins of data when performing secondary analyses and running reproducible and scalable workflows while minimizing IT overhead. AWS offers services for scalable, cost-effective data analysis and simplified orchestration for running and automating parallelizable workflows. Options for automating workflows enable reproducible research or clinical applications, while AWS native, partner (NVIDIA and DRAGEN), and open-source solutions (Cromwell and Nextflow) provide flexible options for workflow orchestrators to help scale data analysis.

Data aggregation and governance
Successful genomics research and interpretation often depend on multiple, diverse, multi-modal datasets from large populations. AWS enables organizations to harmonize multi-omic datasets and govern robust data access controls and permissions across a global infrastructure to maintain data integrity as research involves more collaborators and stakeholders. AWS simplifies the ability to store, query, and analyze genomics data, and to link it with clinical information.

Interpretation and deep learning for tertiary analysis
Analysis requires integrated multi-modal datasets and knowledge bases, intensive computational power, big data analytics, and machine learning at scale, which historically can take weeks or months, delaying time to insights. AWS accelerates the analysis of big genomics data by leveraging machine learning and high-performance computing. With AWS, researchers have access to greater computing efficiencies at scale, reproducible data processing, data integration capabilities to pull in multi-modal datasets, and public data for clinical annotation, all within a compliance-ready environment.

Clinical applications
Several hindrances impede the scale and adoption of genomics for clinical applications, including the speed of analysis, managing protected health information (PHI), and providing reproducible and interpretable results. By leveraging the capabilities of the AWS Cloud, organizations can establish a differentiated capability in genomics to advance their applications in precision medicine and patient practice. AWS services enable the use of genomics in the clinic by providing the data capture, compute, and storage capabilities needed to empower the modernized clinical lab to decrease the time to results, all while adhering to the most stringent patient privacy regulations.

Open datasets
As more life science researchers move to the cloud and develop cloud-native workflows, they bring reference datasets with them, often in their own personal buckets, leading to duplication, silos, and poor version documentation of commonly used datasets. The AWS Open Data Program (ODP) helps democratize data access by making it readily available in Amazon S3, providing the research community with a single documented source of truth. This increases study reproducibility, stimulates community collaboration, and reduces data duplication. The ODP also covers the cost of Amazon S3 storage, egress, and cross-region transfer for accepted datasets.

Cost optimization
Researchers utilize massive genomics datasets that require large-scale storage options and powerful computational processing, which can be cost-prohibitive. AWS presents cost-saving opportunities for genomics researchers across the data lifecycle, from storage to interpretation. AWS infrastructure and data services enable organizations to save time and money and devote more resources to science.

Summary
Relevance Lab is a specialist AWS partner working closely on health informatics and genomics solutions, leveraging existing AWS solutions and complementing them with its self-service cloud portal solutions, automation, and governance best practices.

To know more about how we can help standardize, scale, and speed up scientific research in the cloud, feel free to contact us at marketing@relevancelab.com.

References
AWS Whitepaper on Genomics Data Transfer, Analytics and Machine Learning
Genomics Workflows on AWS
HPC on AWS Video – Running Genomics Workflows with Nextflow
Workflow Orchestration with Nextflow on AWS Cloud
Biotech Blueprint on AWS Cloud
Running R on AWS
Advanced Bioinformatics Workshop





AWS Marketplace is a high-potential channel for the delivery of software and professional services. The main benefit for customers is that they get a single bill from AWS for all their infrastructure and software consumption. Also, since AWS is already on the approved vendor list for many enterprises, it is easier for them to consume software from the same vendor.

Relevance Lab has always considered AWS Marketplace one of the important channels for distributing its software products. In 2020, we listed our RLCatalyst 4.3.2 BOTs Server product on the AWS Marketplace as an AMI-based product that a customer could download and run in their AWS account. This year, RLCatalyst Research Gateway was listed on the AWS Marketplace as a Software as a Service (SaaS) product.

This blog details the steps a customer goes through to consume this product from the AWS Marketplace.


Step-1: The first step for a customer looking to find the product is to log in to their account, visit the AWS Marketplace, and search for RLCatalyst Research Gateway. This shows the Research Gateway product at the top of the results list. Clicking on the link leads to the details page.

The product details page lists important details like:

• Pricing information
• Support information
• Setup instructions

Step-2: The second step is to subscribe to the product by clicking on the “Continue to Subscribe” button. This step needs the user to log in to their AWS account (if not done earlier). The page that comes up shows the contract options the user can choose from. RLCatalyst Research Gateway (SaaS) offers three subscription tiers:

• Small tier (1-10 users)
• Medium tier (11-25 users)
• Large tier (unlimited users)

The customer also has the option of choosing a monthly or an annual contract. The monthly contract is good for customers who want to try the product, or for those who prefer a budget outflow spread over the year rather than a lump sum. The annual contract is good for customers who are already committed to using the product in the long term, and it gets the customer an additional discount over the monthly price.

The customer also has to choose whether they want the contract to renew automatically.

One of the great features of AWS Marketplace is that the customer can modify the contract at any time and upgrade to a higher plan (e.g., Small tier to Medium or Large tier). The customer can also modify the contract to opt for auto-renewal at any time.

Step-3: The third step is to click on the “Subscribe” button after choosing the contract options. This leads the user to the registration page, where they can set up their RLCatalyst Research Gateway account.



This screen is meant for the Administrator persona to enter the details for the organization. Once the user enters the details, agrees to the End User License Agreement (EULA), and clicks on the Sign-up button, the process for provisioning the account is set in motion. The user should get an acknowledgment email within 12 hours and a verification email within 24 hours.

Step-4: The user should verify their email account by clicking on the verification link in the email they receive from RLCatalyst Research Gateway.

Step-5: Finally, the user gets a “Welcome” email with the details of their account, including the custom URL for logging in to their RLCatalyst Research Gateway account. The user is now ready to log in to the portal, where they will see a Welcome screen.


                                        Step-6: The user can now set up their first Organizational Unit in the RLCatalyst Research Gateway portal by following these steps.

                                        6.1 Navigate to settings from the menu at the top right.


                                        6.2 Click on the “Add New” button to add an AWS account.


                                        6.3 Enter the details of the AWS account.


Note that the account name entered on this screen can be any name that helps the Administrator remember which OU and project the account is meant for.

                                        6.4 The Administrator can repeat the procedure to add more than one project (consumption) account.

Step-7: Next, the Administrator needs to add Principal Investigator users to the account. For this, they should contact the support team either by email (rlc.support@relevancelab.com) or by visiting the support portal (https://serviceone.relevancelab.com).

                                        Step-8: The final step to set up an OU is to click on the “Add New” button on the Organizations page.


8.1 The Administrator should give a friendly name to the Organization in the “Organization Name” field, then choose all the accounts that will be consumed by projects in this Organization. A friendly description should be entered in the “Organization Description” field. Finally, choose a Principal Investigator who will manage/own this Organizational Unit and click “Add Organization” to add the OU.


                                        Summary
As you can see above, ordering RLCatalyst Research Gateway (SaaS) from the AWS Marketplace makes it extremely easy to get started, and end-users can start using the product in no time. Given the SaaS model, the customer does not need to worry about setting up the software in their account. At the same time, using their own AWS account for the projects gives them complete transparency into budget consumption.
In our next blog, we will provide step-by-step details of adding organizational units, projects, and users to complete the next part of the setup.

                                        To learn more about AWS Marketplace installation click here.

                                        If you want to learn more about the product or book a live demo, feel free to contact marketing@relevancelab.com.




                                        2021 Blog, Blog, Featured

Non-scientific tasks such as setting up instances, installing software libraries, making models compile, and preparing input data are some of the biggest pain points for atmospheric scientists, or any scientists for that matter. These tasks are challenging because they demand strong technical skills, pulling scientists away from their core areas of analysis and research data compilation. Adding to this, some of these tasks require high-performance computation, complicated software, and large data sets. Lastly, researchers need a real-time view of their actual spending, as research projects are often budget-bound. Relevance Lab helps researchers “focus on science and not servers” in partnership with AWS, leveraging the RLCatalyst Research Gateway (RG) product.

                                        Why RLCatalyst Research Gateway?
Speeding up scientific research using the AWS cloud is a growing trend towards achieving “Research as a Service”. However, adoption of the AWS Cloud can be challenging for researchers, with surprises on costs, security, governance, and the right architectures. Similarly, Principal Investigators can have a challenging time managing a research program with collaboration, tracking, and control. Research institutions would like to provide consistent and secure environments, standard approved products, and proper governance controls. The product was created to solve these common needs of Researchers, Principal Investigators, and Research Institutions.


                                        • Available on AWS Marketplace and can be consumed in both SaaS as well as Enterprise mode
                                        • Provides a Self-Service Cloud Portal with the ability to manage the provisioning lifecycle of common research assets
• Gives real-time visibility of the spend against the defined project budgets
• The Principal Investigator has the ability to pause or stop the project if the budget is exceeded, until a new grant is approved

                                        In this blog, we explain how the product has been used to solve a common research problem of GEOS-Chem used for Earth Sciences. It covers a simple process that starts with access to large data sets on public S3 buckets, creation of an on-demand compute instance with the application loaded, copying the latest data for analysis, running the analysis, storing the output data, analyzing the same using specialized AI/ML tools and then deleting the instances. This is a common scenario faced by researchers daily, and the product demonstrates a simple Self-Service frictionless capability to achieve this with tight controls on cost and compliance.

GEOS-Chem enables simulations of atmospheric composition on local to global scales. It can be used off-line as a 3-D chemical transport model driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office (GMAO). The figure below shows the basic construct of GEOS-Chem input and output analysis.



                                        Being a common use case, there is documentation available in the public domain by researchers on how to run GEOS-Chem on AWS Cloud. The product makes the process simpler using a Self-Service Cloud portal. To know more about similar use cases and advanced computing options, refer to AWS HPC for Scientific Research.



                                        Steps for GEOS-Chem Research Workflow on AWS Cloud
Prerequisites for the researcher before starting data analysis:

• A valid AWS account and access to the RG portal
• Access to a publicly accessible S3 bucket with large research data sets
• An additional EBS volume for your ongoing operational research work (for occasional usage, it is recommended to store a snapshot in S3 for better cost management)
                                        • A pre-provisioned SageMaker Jupyter notebook to analyze output data

                                        Once done, below are the steps to execute this use case.

                                        • Login to the RG Portal and select the GEOS-Chem project
                                        • Launch an EC2 instance with GEOS-Chem AMI
                                        • Login to EC2 using SSH and configure AWS CLI
                                        • Connect to a public S3 bucket from AWS CLI to list NASA-NEX data
                                        • Run the simulation and copy the output data to a local S3 bucket
                                        • Link the local S3 bucket to AWS SageMaker instance and launch a Jupyter notebook for analysis of the output data
                                        • Once done, terminate the EC2 instance and check for the cost spent on the use case
                                        • All costs related to GEOS-Chem project and researcher consumption are tracked automatically
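The S3 steps above rely on the AWS CLI's anonymous-access mode for public datasets. A sketch of the commands a researcher might run from the EC2 instance, expressed as a small helper (bucket names and prefixes below are placeholders, not the actual NASA-NEX paths):

```python
def s3_workflow_commands(public_bucket, prefix, output_bucket,
                         local_dir="./geos-chem-run"):
    """Build the AWS CLI commands for the copy-in / copy-out steps.

    Public datasets allow unsigned requests, hence --no-sign-request on
    reads from the public bucket; writes to the project-owned bucket use
    the instance's normal credentials.
    """
    return [
        # Browse the public research dataset without credentials
        f"aws s3 ls --no-sign-request s3://{public_bucket}/{prefix}/",
        # Pull input data onto the instance's EBS volume
        f"aws s3 cp --no-sign-request s3://{public_bucket}/{prefix}/ "
        f"{local_dir}/input/ --recursive",
        # Push simulation output to your own bucket for SageMaker analysis
        f"aws s3 cp {local_dir}/output/ s3://{output_bucket}/runs/ --recursive",
    ]

for cmd in s3_workflow_commands("example-public-data", "NEX-GDDP",
                                "my-project-results"):
    print(cmd)
```

The last command feeds the analysis step: SageMaker's Jupyter notebook reads the output objects from the project-owned bucket.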

                                        Sample Output Analysis
Running the output files through the Jupyter notebook compiles them and presents the output data in a visual format, as shown in the sample below. The researcher can then create a snapshot, upload it to S3, and terminate the EC2 instance (without deleting the additional EBS volume created along with it).

                                        Output to analyze loss rate and Air mass of Hydroxide pertaining to Atmospheric Science.


                                        Summary
Scientific computing can take advantage of cloud computing to speed up research, scale up computing needs almost instantaneously, and do all this with much better cost-efficiency. Researchers no longer need to worry about the expertise required to set up the infrastructure in AWS as they can leave this to tools like RLCatalyst Research Gateway, thus compressing the time it takes to complete their research computing tasks.

The steps demonstrated in this blog can easily be replicated for other, similar research domains. They can also be used to onboard new researchers with pre-built solution stacks in an easy-to-consume form. RLCatalyst Research Gateway is available in SaaS mode from AWS Marketplace, and research institutions can continue to use their existing AWS accounts to configure and enable the solution for more effective scientific research governance.

                                        To learn more about GEOS-Chem use cases, click here.

                                        If you want to learn more about the product or book a live demo, feel free to contact marketing@relevancelab.com.

                                        References
                                        Enabling Immediate Access to Earth Science Models through Cloud Computing: Application to the GEOS-Chem Model
                                        Enabling High‐Performance Cloud Computing for Earth Science Modeling on Over a Thousand Cores: Application to the GEOS‐Chem Atmospheric Chemistry Model




                                        HPC Blog, 2021 Blog, Blog, Featured

AWS provides a comprehensive, elastic, and scalable cloud infrastructure to run your HPC applications. Working with AWS in exploring HPC for driving scientific research, Relevance Lab leveraged its RLCatalyst Research Gateway product to provision an HPC cluster using AWS Service Catalog, with simple steps to launch a new environment for research. This blog captures the steps used to launch a simple HPC 1.0 cluster on AWS and the roadmap to extend the functionality to cover more advanced use cases of HPC Parallel Cluster.

                                        AWS delivers an integrated suite of services that provides everything needed to build and manage HPC clusters in the cloud. These clusters are deployed over various industry verticals to run the most compute-intensive workloads. AWS has a wide range of HPC applications spanning from traditional applications such as genomics, computational chemistry, financial risk modeling, computer-aided engineering, weather prediction, and seismic imaging to new applications such as machine learning, deep learning, and autonomous driving. In the US alone, multiple organizations across different specializations are choosing cloud to collaborate for scientific research.


Similar programs exist across different geographies and institutions in the EU and Asia, along with country-specific Public Sector programs. Our focus is to work with AWS and regional scientific institutions to bring the power of supercomputers to day-to-day researchers in a cost-effective manner, with proper governance and tracking. Also, with Self-Service models, the focus needs to shift from worrying about computation to data, workflows, and analytics, which requires a new paradigm: the prospect of serverless scientific computing, covered in later sections.

Relevance Lab RLCatalyst Research Gateway provides a Self-Service Cloud portal to provision AWS products with a 1-click model based on AWS Service Catalog. When dealing with more complex AWS products like HPC, there is a need for a multi-step provisioning model and post-provisioning actions that are not always possible using standard AWS APIs. In these situations, which require complex orchestration and post-provisioning automation, RLCatalyst BOTs provide a flexible and scalable solution that complements the base Research Gateway features.

                                        Building blocks of HPC on AWS
AWS offers various services that make it easy to build an HPC environment.


                                        An HPC solution in AWS uses the following components as building blocks.

                                        • EC2 instances are used for Master and Worker nodes. The master nodes can use On-Demand instances and the worker nodes can use a combination of On-Demand and Spot Instances.
                                        • The software for the manager nodes is built as an AMI and used for the creation of Master nodes.
                                        • The agent software for the managers to communicate with the worker nodes is built into a second AMI that is then used for provisioning the Worker nodes.
                                        • Data is shared between different nodes using a file-sharing mechanism like FSx Lustre.
                                        • Long-term storage uses AWS S3.
                                        • Scaling of nodes is done via Auto-scaling.
                                        • KMS for encrypting and decrypting the keys.
• Directory services to provide the domain name for accessing the HPC cluster via the UI.
• A Lambda function to create the user directory.
                                        • Elastic Load Balancing is used to distribute incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances.
• Amazon EFS, a regional service, is used to store data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access the file system across AZs.
• AWS VPC to launch the EC2 instances in a private cloud.

                                        Evolution of HPC on AWS
• HPC clusters first came into existence in AWS through the CfnCluster CloudFormation template, which creates a number of Manager and Worker nodes in the cluster based on the input parameters. This product can be made available through AWS Service Catalog and is an item that can be provisioned from the RLCatalyst Research Gateway. Cluster manager software like Slurm, Torque, or SGE is pre-installed on the manager nodes, and the agent software is pre-installed on the worker nodes. Also pre-installed is software that can provide a UI (like NICE EnginFrame) for the user to submit jobs to the cluster manager.
                                        • AWS Parallel Cluster is a newer offering from AWS for provisioning an HPC cluster. This service provides an open-source, CLI-based option for setting up a cluster. It sets up the manager and worker nodes and also installs controlling software that can watch the job queues and trigger scaling requests on the AWS side so that the overall cluster can grow or shrink based on the size of the queue of jobs.

                                        Steps to Launch HPC from RLCatalyst Research Gateway
                                        A standard HPC launch involves the following steps.

                                        • Provide the input parameters for the cluster. This will include
                                          • The compute instance size for the master node (vCPUs, RAM, Disk)
                                          • The compute instance size for the worker nodes (vCPUs, RAM, Disk)
                                          • The minimum and maximum number of worker nodes.
                                          • Select the workload manager software (Slurm, Torque, SGE)
                                          • Connectivity options (SSH keys etc.)
                                        • Launch the product.
                                        • Once the product is in Active state, connect to the URL in the Output parameters on the Product Details page. This connects you to the UI from where you can submit jobs to the cluster.
                                        • You can SSH into the master nodes using the key pair selected in the Input form.
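The input form above maps naturally onto a small validation step before a provisioning request is submitted. A sketch of that check (the field names are illustrative, not the actual Research Gateway API):

```python
# Workload managers the launch form offers, per the steps above
SUPPORTED_SCHEDULERS = {"slurm", "torque", "sge"}

def validate_cluster_request(scheduler, min_workers, max_workers, key_pair):
    """Sanity-check HPC launch parameters; returns a list of error strings."""
    errors = []
    if scheduler.lower() not in SUPPORTED_SCHEDULERS:
        errors.append(f"unsupported scheduler: {scheduler}")
    if min_workers < 0 or max_workers < 1:
        errors.append("worker counts must be non-negative, with max >= 1")
    elif min_workers > max_workers:
        errors.append("minimum workers cannot exceed maximum workers")
    if not key_pair:
        errors.append("an SSH key pair is required to reach the master node")
    return errors

# A well-formed request produces no errors
assert validate_cluster_request("Slurm", 2, 10, "my-key-pair") == []
```

Validating up front matters here because, as noted below, a failed cluster provision can waste a long CloudFormation run.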

RLCatalyst Research Gateway uses the CfnCluster method to create an HPC cluster. This allows the HPC cluster to be created just like any other product in the Research Gateway catalog. Though provisioning may take up to 45 minutes to complete, it creates a URL in the outputs that can be used to submit jobs to the cluster.

                                        Advanced Use Cases for HPC

                                        • Computational Fluid Dynamics
                                        • Risk Management & Portfolio Optimization
                                        • Autonomous Vehicles – Driving Simulation
                                        • Research and Technical Computing on AWS
                                        • Cromwell on AWS
                                        • Genomics on AWS

                                        We have specifically looked at the use case that pertains to BioInformatics where a lot of the research uses Cromwell server to process workflows defined using the WDL language. The Cromwell server acts as a manager that controls the worker nodes, which execute the tasks in the workflow. A typical Cromwell setup in AWS can use AWS Batch as the backend to scale the cluster up and down and execute containerized tasks on EC2 instances (on-demand or spot).



                                        Prospect of Serverless Scientific Computing and HPC
With the advent of serverless computing and its availability on all major cloud platforms, it is now possible to take computing that would be done on a High Performance Cluster and run it as Lambda functions, bringing the “Function as a Service” paradigm to HPC and scientific research workflows. The obvious advantage of this model is that the virtual cluster is highly elastic, and you are charged only for the exact execution time of each function executed.
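The idea can be illustrated in miniature: split a job into many small, stateless invocations and aggregate the results, just as an orchestrator would fan work out across Lambda functions. The sketch below runs the "invocations" in-process for illustration; in a real deployment `handler` would be the deployed function and `fan_out` the orchestration layer (e.g. Step Functions), both hypothetical names here:

```python
def handler(event, context=None):
    """A Lambda-style worker: integrate f(x) = x*x over one slice [a, b].

    Each invocation is stateless and sees only its own slice of the domain,
    which is what makes the fleet elastic.
    """
    a, b, steps = event["a"], event["b"], event["steps"]
    h = (b - a) / steps
    # Midpoint rule over this slice only
    return sum((a + (i + 0.5) * h) ** 2 * h for i in range(steps))

def fan_out(a, b, slices, steps_per_slice=1000):
    """What the orchestrator would do: build one event per slice, sum results."""
    width = (b - a) / slices
    events = [{"a": a + i * width, "b": a + (i + 1) * width,
               "steps": steps_per_slice} for i in range(slices)]
    return sum(handler(e) for e in events)

# The integral of x^2 over [0, 3] is exactly 9; 100 "invocations" recover it.
result = fan_out(0.0, 3.0, 100)
```

Because each slice is independent, doubling the slice count halves per-invocation work without any cluster resizing, which is the elasticity argument above.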

One limitation of this model is that only a few runtimes are currently supported, such as Node.js and Python, while much scientific computing code may require additional runtimes like C, C++, or Java. However, this is fast changing, and cloud providers are introducing new runtimes like Go and Rust.


                                        Summary
Scientific computing can take advantage of cloud computing to speed up research, scale up computing needs almost instantaneously, and do all this with much better cost efficiency. Researchers no longer need to worry about the expertise required to set up the infrastructure in AWS as they can leave this to tools like RLCatalyst Research Gateway, thus compressing the time it takes to complete their research computing tasks.

To learn more about this solution, or to use it for your internal needs, feel free to contact marketing@relevancelab.com.

                                        References
                                        Getting started with HPC on AWS
                                        HPC on AWS Whitepaper
                                        AWS HPC Workshops
                                        Genomics in the Cloud
                                        Serverless Supercomputing: High Performance Function as a Service for Science
                                        FaaSter, Better, Cheaper: The Prospect of Serverless Scientific Computing and HPC




                                        2021 Blog, AWS Governance, Cloud Blog, Blog, Featured, thank you

Compliance on the Cloud is an important aspect in today’s world of remote working. As enterprises accelerate the adoption of cloud to drive frictionless business, there can be surprises on security, governance, and cost without a proper framework. Relevance Lab (RL) helps enterprises speed up workload migration to the cloud with the assurance of Security, Governance, and Cost Management, using an integrated solution built on standard AWS products and an open-source framework. The key building blocks of this solution are described below.


                                        Why do enterprises need Compliance as a Code?
For most enterprises, the major challenges are governance, compliance, and a lack of visibility into their cloud infrastructure. They spend thousands of man-hours trying to achieve security and compliance in a siloed manner. This can be addressed by automating compliance monitoring and increasing visibility across the cloud with the right set of tools and frameworks. Relevance Lab's Compliance as a Code framework addresses enterprises' need to automate security and compliance. Through a combination of preventive, detective, and responsive controls, we help enterprises enforce nearly continuous compliance and auto-remediation, thereby increasing overall security and reducing compliance cost.

                                        Key tools and framework of Cloud Governance 360°
AWS Control Tower: AWS Control Tower (CT) helps organizations set up, manage, monitor, and govern a secure multi-account AWS environment using AWS best practices. Setting up Control Tower on a new account is relatively simple compared to setting it up on an existing account. Once Control Tower is set up, the landing zone has the following.


                                        • 2 Organizational Units
                                        • 3 accounts, a master account and isolated accounts for log archive and security audit
                                        • 20 preventive guardrails to enforce policies
                                        • 2 detective guardrails to detect config violations

Apart from this, you can customize the guardrails and implement them using AWS Config Rules. For more details on Control Tower implementation, refer to our earlier blog here.

                                        Cloud Custodian: Cloud Custodian is a tool that unifies the dozens of tools and scripts most organizations use for managing their public cloud accounts into one open-source tool. It uses a stateless rules engine for policy definition and enforcement, with metrics, structured outputs and detailed reporting for Cloud Infrastructure. It integrates tightly with serverless runtimes to provide real time remediation/response with low operational overhead.

Organizations can use Custodian to manage their cloud environments by ensuring compliance with security policies, tag policies, garbage collection of unused resources, and cost management from a single tool. Custodian adheres to a Compliance as Code principle, helping you validate, dry-run, and review changes to your policies. The policies are expressed in YAML and include the following.

                                        • The type of resource to run the policy against
• Filters to narrow down the set of resources
• Actions to take on the filtered set of resources
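A minimal example of such a policy: the resource type, the filters that narrow the set, and the actions to run on what remains. The tag key, notify recipient, and queue URL below are illustrative placeholders, not values from this deployment:

```yaml
policies:
  - name: ec2-missing-owner-tag
    resource: aws.ec2             # the resource type to run the policy against
    filters:                      # narrow down the set of resources
      - "tag:Owner": absent       # illustrative tag key
    actions:                      # what to do with the filtered set
      - type: notify
        to: ["cloud-admins@example.com"]          # placeholder recipient
        transport:
          type: sqs
          queue: https://sqs.us-east-1.amazonaws.com/123456789012/custodian-notify  # placeholder
```

Running `custodian run` against this file in dry-run mode shows which instances would match before any action fires.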

Cloud Custodian is a rules engine for managing public cloud accounts and resources. It allows users to define policies to enable a well-managed cloud infrastructure that is both secure and cost-optimized. It consolidates many of the ad hoc scripts organizations have into a lightweight and flexible tool, with unified metrics and reporting.



Security Hub: AWS Security Hub gives you a comprehensive view of your security alerts and security posture across your AWS accounts. It is a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions like Cloud Custodian. You can also take action on these security findings by investigating them in Amazon Detective, or by using Amazon CloudWatch Events rules to send the findings to ITSM, chat, Security Information and Event Management (SIEM), Security Orchestration, Automation and Response (SOAR), and incident management tools, or to custom remediation playbooks.
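Routing findings with CloudWatch Events rules comes down to an event pattern that matches imported findings, plus a decision about which findings are worth a ticket. A sketch of both (the severity threshold and the ITSM hand-off are illustrative choices, not part of Security Hub itself):

```python
# Event pattern an EventBridge/CloudWatch Events rule would use to catch
# findings as Security Hub imports them.
FINDINGS_PATTERN = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
}

def matches(event, pattern=FINDINGS_PATTERN):
    """Minimal matcher for top-level event-pattern fields."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

def should_open_ticket(event, min_severity=70):
    """Forward only high-severity findings to the ITSM tool.

    Security Hub findings carry a normalized severity of 0-100; the
    threshold of 70 here is an assumed policy, not an AWS default.
    """
    if not matches(event):
        return False
    findings = event.get("detail", {}).get("findings", [])
    return any(f.get("Severity", {}).get("Normalized", 0) >= min_severity
               for f in findings)
```

In practice the rule's target would be a Lambda function or SQS queue that calls the ITSM API; the filter above keeps low-severity noise out of the ticket queue.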




Below is a snapshot of features across AWS Control Tower, Cloud Custodian, and Security Hub. As shown in the table, these solutions complement each other across common compliance needs.


| # | AWS Control Tower | Cloud Custodian | Security Hub |
|---|---|---|---|
| 1 | Easy to implement and configure within a few clicks | Lightweight and flexible open-source framework that helps deploy cloud policies | Gives a comprehensive view of security alerts and security posture across AWS accounts |
| 2 | Helps achieve “Governance at Scale”: account management, security, compliance automation, budget and cost management | Helps achieve real-time compliance and cost management | A single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services |
| 3 | Predefined guardrails based on best practices; establish/enable guardrails | You define the rules, and Cloud Custodian enforces them | Continuously monitors the account using automated security checks based on AWS best practices |
| 4 | Guardrails are enabled at the Organization level | Exemptions can be handled if an account needs to include or exclude specific policies | With a few clicks in the Security Hub console, connect multiple AWS accounts and consolidate findings across them |
| 5 | Automates compliant account provisioning | Can be included in the account-creation workflow to deploy the set of policies to every AWS account during bootstrapping | Automates continuous account- and resource-level configuration and security checks using industry standards and best practices |
| 6 | Separate account for centralized logging of all activities across accounts | Offers comprehensive logs whenever a policy is executed, which can be stored in an S3 bucket | Create and customize your own insights, tailored to your specific security and compliance needs |
| 7 | Separate account for audit, designed to give security and compliance teams read and write access to all accounts | Can be integrated with AWS Config, AWS Security Hub, AWS Systems Manager, and AWS X-Ray | Supports a diverse ecosystem of partner integrations |
| 8 | Single-pane dashboard for visibility into all OUs, accounts, and guardrails | Needs integration with Security Hub to view all policies implemented across regions and accounts | Monitor your security posture and quickly identify security issues and trends across AWS accounts in Security Hub’s summary dashboard |


                                        Relevance Lab Compliance as a Code Framework
Relevance Lab’s Compliance as a Code framework is an integrated model combining AWS Control Tower (CT), Cloud Custodian, and AWS Security Hub. As shown below, CT gives organizations pre-defined multi-account governance based on AWS best practices, so account provisioning is standardized across the hundreds or thousands of accounts in the organization. By enabling Config rules, you can add compliance checks for security, cost, and account management. For event- and action-based policies, Cloud Custodian complements AWS CT by monitoring, notifying, and taking remediation actions based on events. As these policies run in AWS Lambda, Cloud Custodian enforces Compliance as Code and auto-remediation, enabling organizations to accelerate towards security and compliance simultaneously. Real-time visibility into who made what changes from where lets us detect human errors and non-compliance and take suitable remediation, improving operational efficiency and bringing in cost optimization.

For example, Custodian can identify all untagged EC2 instances, or EBS volumes that are not mounted to an EC2 instance, and notify the account admin that they will be terminated in the next 48 to 72 hours if no action is taken. A custom insights dashboard on Security Hub helps admins monitor non-compliance and integrate with an ITSM tool to create tickets and assign them to resolver groups. RL has implemented Compliance as a Code for its own SaaS production platform, RLCatalyst Research Gateway, a custom cloud portal for researchers.
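That detection step can be sketched as a filter over the resource descriptions that the EC2 `describe_instances` and `describe_volumes` APIs return. The dicts below are trimmed, hypothetical shapes, kept minimal for illustration:

```python
def untagged_instances(instances, required_tag="Owner"):
    """Instances missing the required tag: candidates for notify-then-terminate."""
    return [i["InstanceId"] for i in instances
            if required_tag not in {t["Key"] for t in i.get("Tags", [])}]

def unattached_volumes(volumes):
    """EBS volumes in the 'available' state are not mounted to any instance."""
    return [v["VolumeId"] for v in volumes if v.get("State") == "available"]

# Trimmed example inputs of the shape the EC2 APIs return
instances = [
    {"InstanceId": "i-0001", "Tags": [{"Key": "Owner", "Value": "alice"}]},
    {"InstanceId": "i-0002", "Tags": []},
]
volumes = [
    {"VolumeId": "vol-01", "State": "in-use"},
    {"VolumeId": "vol-02", "State": "available"},
]
print(untagged_instances(instances))  # ['i-0002']
print(unattached_volumes(volumes))    # ['vol-02']
```

In the Custodian flow, these lists would feed the notify action first and the terminate/delete action only after the grace period lapses.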



                                        Common Use Cases


                                        How to get started
Relevance Lab is an AWS consulting partner and helps organizations achieve Compliance as a Code using AWS best practices. While enterprises can try to build some of these solutions themselves, it is time-consuming and error-prone, and calls for a specialist partner. RL has helped 10+ enterprises with this need and has a reusable framework to meet security and compliance requirements. To start, customers can enroll in a 10-10 program which gives insight into their current cloud compliance. Based on an assessment, Relevance Lab will share a gap-analysis report and help design the appropriate “to-be” model. Our cloud governance professional services group also provides implementation and support services with agility and cost effectiveness.

For more details, please feel free to reach out to marketing@relevancelab.com.




                                        2021 Blog, AWS Platform, Blog, Featured

The year 2020 undoubtedly brought unprecedented challenges with COVID-19, requiring countries, governments, businesses, and people to respond proactively with new-normal approaches. Certain businesses managed to respond to the dynamic macro environment with far more agility, flexibility, and scale. Cloud computing had a dominant role in helping businesses stay connected and deliver critical solutions. Relevance Lab scaled up its partnership with AWS to align with the new focus areas, resulting in a significant increase in the coverage of products, specialized solutions, and customer impact over the past 12 months compared to earlier years.

AWS launched a number of new global initiatives in 2020 in response to the business challenges resulting from the COVID-19 pandemic. The following picture describes those initiatives in a nutshell.


                                        Relevance Lab’s Focus areas have been very well aligned
Given the macro environment challenges, Relevance Lab quickly aligned its focus on helping customers deal with the emerging challenges, in areas very complementary to the AWS global initiatives and responses listed above.

Relevance Lab aligned its AWS solutions and services across four dominant themes, leveraging RL’s own IP-based platforms and deep cloud competencies. The following picture highlights those key initiatives and the native AWS products and services leveraged in the process.


Relevance Lab also made major strides in progressing along the AWS Partnership Maturity Levels for Specialized Consulting and key product offerings. The following picture highlights the major achievements in 2020 in a nutshell.



                                        AWS Specializations & Spotlights
Relevance Lab is a specialist cloud managed services company with deep expertise in DevOps, Service Automation, and ITSM integrations. It also has the RLCatalyst platform, built to support automation across AWS Cloud and ITSM platforms. The RLCatalyst family of solutions helps with self-service cloud portals, IT service monitoring, and automation through BOTs. While maintaining a multi-sector client base, we are also uniquely focused, with existing solutions for Higher Education, Public Sector, and Research clients.


                                        Spotlight-1: AWS Cloud Portals Solution
Relevance Lab has developed a family of cloud portal solutions on top of our RLCatalyst platform. These solutions aim to simplify AWS consumption using self-service models, with emphasis on 1-click provisioning and lifecycle management, dashboard views of budget and cost consumption, and modeling of personas, roles, and responsibilities within a sector context.


                                        A unique feature of the above solutions is that they promote a hybrid model of consumption wherein users can bring their own AWS accounts (consumption accounts) under the framework of our cloud portal solution and benefit from being able to consume AWS for their educational and research needs in an easy self-service model.

The solutions can be consumed under either an Enterprise or a SaaS license. In addition, they will be made available on AWS Marketplace soon.


                                        Spotlight-2: AWS Security Governance at Scale Framework
                                        The framework and deployment architecture uses AWS Control Tower as the foundational service and other closely aligned and native AWS products and services such as AWS Service Catalog, AWS Security Hub, AWS Budgets, etc. and addresses subject areas such as multi-account management, cost management, security, compliance & governance.

Relevance Lab can assess, design, and deploy or migrate to a fully secure AWS environment that lends itself to governance at scale. To encourage clients to adopt this journey, we have launched a 10-10 Program for AWS Security Governance that provides clients with an upfront blueprint of the entire migration or deployment process and end-state architecture, so that they can make an informed decision.


                                        Spotlight-3: Automated User Onboarding/Offboarding for Enterprises Use Case
Relevance Lab is a unique partner with deep expertise in AWS and ITSM platforms such as ServiceNow, freshservice, and JIRA Service Desk. The intersection of these platforms lends itself to relevant use cases for the industry. Relevance Lab has developed a solution for automated user onboarding and offboarding in an enterprise context. This solution brings together multiple systems in a workflow model to accomplish onboarding and offboarding tasks: it integrates HR systems, AWS Service Catalog and other AWS services, and ITSM platforms such as ServiceNow, with Relevance Lab’s RLCatalyst BOTs engine orchestrating the end-to-end process in an unassisted manner.


                                        Key Customer Use Cases and Success Stories in 2020
Relevance Lab helped customers across verticals ride the growing momentum of cloud adoption in the post-COVID-19 situation, rewriting their digital solutions to adopt touchless interactions, remote distributed workforces, and strong security governance for frictionless business.


                                        AWS Best Practices – Blogs, Campaigns, Technical Write-ups
                                        The following is a collection of knowledge articles and best practices published throughout the year related to our AWS centered solutions & services.

                                        AWS Service Management


                                        AWS Practice Solution Campaigns

                                        AWS Governance at Scale

                                        AWS Workspaces

                                        RLCatalyst Product Details

                                        AWS Infrastructure Automation

                                        AWS E-Learning & Research Workbench Solutions

                                        Cloud Networking Best Practices


                                        Summary
The momentum of cloud adoption in 2020 is quite likely to continue and grow in the new year 2021. Relevance Lab is a trusted partner in your cloud adoption journey, driven by a focus on the following key specializations:

                                        • Cloud First Approach for Workload Planning
                                        • Cloud Governance 360 for using AWS Cloud the Right-Way
                                        • Automation Led Service Delivery Management with Self Service ITSM and Cloud Portals
                                        • Driving Frictionless Business and AI-Driven outcomes for Digital Transformation and App Modernization

                                        To learn more about our services and solutions listed above or engage us in consultative discussions for your AWS and other IT service needs, feel free to contact us at marketing@relevancelab.com





AWS X-Ray is an application performance service that collects data about requests that your application processes and provides tools to view, filter, and gain insights into that data to identify issues and opportunities for optimization. It enables a developer to create a service map that displays an application’s architecture. For any traced request to your application, you can see detailed information not only about the request and response, but also about the calls that your application makes to downstream AWS resources, microservices, databases, and HTTP web APIs. It is compatible with microservices-based and serverless applications.

                                        The X-Ray SDK provides

                                        • Interceptors to add to your code to trace incoming HTTP requests
                                        • Client handlers to instrument AWS SDK clients that your application uses to call other AWS services
                                        • An HTTP client to use to instrument calls to other internal and external HTTP web services

                                        The SDK also supports instrumenting calls to SQL databases, automatic AWS SDK client instrumentation, and other features.

                                        Instead of sending trace data directly to X-Ray, the SDK sends JSON segment documents to a daemon process listening for UDP traffic. The X-Ray daemon buffers segments in a queue and uploads them to X-Ray in batches. The daemon is available for Linux, Windows, and macOS, and is included on AWS Elastic Beanstalk and AWS Lambda platforms.
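The wire format the SDK uses for the daemon can be sketched with nothing but the standard library: a short JSON header line, a newline, then the segment document, sent over UDP to port 2000. The snippet below is an illustrative sketch of that format; the segment and trace IDs are made-up example values (real SDKs generate them for you), and no daemon needs to be listening for the send to succeed:

```python
# Sketch of how an X-Ray SDK hands a segment to the local daemon:
# a {"format": "json", "version": 1} header line, a newline, then the
# JSON segment document, over UDP to 127.0.0.1:2000. IDs are examples.
import json
import socket
import time

def send_segment(name, trace_id, host="127.0.0.1", port=2000):
    now = time.time()
    segment = {
        "name": name,
        "id": "70de5b6f19ff9a0a",   # 16-hex-digit segment id (example value)
        "trace_id": trace_id,
        "start_time": now,
        "end_time": now + 0.05,     # pretend the request took 50 ms
    }
    payload = (json.dumps({"format": "json", "version": 1})
               + "\n" + json.dumps(segment))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode("utf-8"), (host, port))
    sock.close()
    return payload

# Example call; the trace id below is an illustrative value only.
payload = send_segment("orders-service",
                       "1-581cf771-a006649127e371903a2de979")
```

If a daemon is running locally, it buffers such segments and uploads them to X-Ray in batches, as described above.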

X-Ray uses trace data from the AWS resources that power your cloud applications to generate a detailed service graph. The service graph shows the client, your front-end service, and the backend services that process requests and persist data. Use the service graph to identify bottlenecks, latency spikes, and other issues you can address to improve the performance of your applications.

                                        AWS X-Ray Analytics helps you quickly and easily understand

                                        • Any latency degradation or increase in error or fault rates
                                        • The latency experienced by customers in the 50th, 90th, and 95th percentiles
                                        • The root cause of the issue at hand
                                        • End users who are impacted, and by how much
                                        • Comparisons of trends based on different criteria. For example, you could understand if new deployments caused a regression
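The percentile figures above are straightforward to reproduce from raw latency samples. Here is a minimal sketch using the nearest-rank method in plain Python (the sample latencies are invented; X-Ray Analytics computes these for you from trace data):

```python
# Sketch: compute the p50/p90/p95 latencies that X-Ray Analytics reports,
# from a list of request latencies in milliseconds (nearest-rank method).
import math

def percentile(samples, p):
    """Nearest-rank percentile for p in (0, 100]."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

latencies = [12, 15, 11, 90, 14, 13, 250, 16, 12, 18]  # ms, invented data
for p in (50, 90, 95):
    print(f"p{p} = {percentile(latencies, p)} ms")
# -> p50 = 14 ms, p90 = 90 ms, p95 = 250 ms
```

The gap between p50 and p95 in such a report is exactly the kind of tail-latency signal X-Ray Analytics surfaces for further drill-down.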

                                        How AWS X-Ray Works
AWS X-Ray receives data from services as segments. X-Ray groups segments that belong to a common request into traces, and processes the traces to generate a service map that provides a visual depiction of the application.
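The grouping step above can be illustrated as bucketing segment documents by their shared trace ID. This is a toy sketch, not the service's implementation; the segment data is invented:

```python
# Toy illustration of X-Ray's grouping step: segments that share a
# trace_id belong to the same request and are assembled into one trace.
from collections import defaultdict

def group_into_traces(segments):
    """Bucket segment names by trace_id."""
    traces = defaultdict(list)
    for seg in segments:
        traces[seg["trace_id"]].append(seg["name"])
    return dict(traces)

segments = [
    {"name": "frontend",   "trace_id": "1-aaa"},  # invented IDs
    {"name": "orders-api", "trace_id": "1-aaa"},
    {"name": "frontend",   "trace_id": "1-bbb"},
]
print(group_into_traces(segments))
# -> {'1-aaa': ['frontend', 'orders-api'], '1-bbb': ['frontend']}
```

Each resulting trace, with its ordered list of services, is what feeds the service-map visualization.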

                                        AWS X-Ray features

                                        • Simple setup
                                        • End-to-end tracing
                                        • AWS Service and Database Integrations
                                        • Support for Multiple Languages
                                        • Request Sampling
                                        • Service map

                                        Benefits of Using AWS X-Ray

                                        Review Request Behaviour
AWS X-Ray traces customer requests and accumulates the information generated by the individual resources and services that make up your application, giving you an end-to-end view of your application’s actions and performance.

                                        Discover Application Issues
With AWS X-Ray, you can extract insights about your application’s performance and discover root causes. Because X-Ray provides tracing, you can follow request paths to diagnose where in your application performance issues arise and what is causing them.

                                        Improve Application Performance
AWS X-Ray’s service maps let you see the connections between the resources and services in your application in real time. You can easily spot where high latencies occur, visualize node and edge latency distributions for services, and then drill down into the specific services and paths impacting application performance.

                                        Ready to use with AWS
AWS X-Ray works with Amazon EC2 Container Service, Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda. You can use AWS X-Ray with applications written in Node.js, Java, and .NET that are deployed on these services.

                                        Designed for a Variety of Applications
AWS X-Ray works for both simple and complex applications, whether in production or in development. With X-Ray, you can easily trace requests made to applications that span multiple AWS accounts, AWS Regions, and Availability Zones.

                                        Why AWS X-Ray?
                                        Developers spend a lot of time searching through application logs, service logs, metrics, and traces to understand performance bottlenecks and to pinpoint their root causes. Correlating this information to identify its impact on end users comes with its own challenges of mining the data and performing analysis. This adds to the triaging time when using a distributed microservices architecture, where the call passes through several microservices. To address these challenges, AWS launched AWS X-Ray Analytics.

                                        X-Ray helps you analyze and debug distributed applications, such as those built using a microservices architecture. Using X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root causes of performance issues and errors. It helps you debug and triage distributed applications wherever those applications are running, whether the architecture is serverless, containers, Amazon EC2, on-premises, or a mixture of all of these.

Relevance Lab is a specialist AWS partner and can help organizations implement a monitoring and observability framework, including AWS X-Ray, to ease application management and help identify bugs in complex distributed workflows.

                                        For a demo of the same, please click here

                                        For more details, please feel free to reach out to marketing@relevancelab.com



