2021 Blog, Blog, Feature Blog, Featured

Bioinformatics is a field of computational science that involves the analysis of sequences of biological molecules (DNA, RNA, or protein). It’s aimed at comparing genes and other sequences within an organism or between organisms, looking at evolutionary relationships between organisms, and using the patterns that exist across DNA and protein sequences to elucidate their function. Being an interdisciplinary branch of the life sciences, bioinformatics integrates computer science and mathematical methods to reveal the biological significance behind the continuously increasing biological data. It does this by developing methodology and analysis tools to explore the large volumes of biological data, helping to query, extract, store, organize, systematize, annotate, visualize, mine, and interpret complex data.

Advances in cloud computing and the availability of open-source genomic pipeline tools have provided researchers with powerful means to speed up the processing of next-generation sequencing data. In this blog, we explain how to leverage the RLCatalyst Research Gateway portal to help researchers focus on science, not servers, while dealing with NGS and popular pipelines like RNA-Seq.

Steps and Challenges of RNA-Seq Analysis
Any bioinformatics analysis involving next-generation sequencing of RNA, known as RNA-Seq (short for "RNA sequencing"), consists of the following steps:


  • Mapping of millions of short sequencing reads to a reference genome, including the identification of splicing events
  • Quantifying expression levels of genes, transcripts, and exons
  • Differential analysis of gene expression among different biological conditions
  • Biological interpretation of differentially expressed genes

As seen from the figure below, the RNA-Seq analysis for identification of differentially expressed genes can be carried out with one of three protocols (A, B, C), involving different sets of bioinformatics tools. In Study A, one might opt for TopHat, STAR, or HISAT for alignment of sequences and HTSeq for quantification, whereas the same steps can be performed using the Kallisto and Salmon tools (Study B) or in combination with Cufflinks (Study C). All of these yield comparable results, which are then used to identify differentially expressed genes or transcripts.


Each of these individual steps is executed using a specific bioinformatics tool or set of tools, such as STAR, RSEM, HISAT2, or Salmon, for gene isoform counting and extensive quality control of the sequenced data. The major bottlenecks in RNA-Seq data analysis are manual software installation, choice of deployment platform, and computational capacity and cost.

The vast number of tools available for a single analysis, with their different versions and compatibility requirements, makes the setup tricky. It can also be time-consuming, as proper configuration and version-compatibility assessment can take months to complete.

Nextflow: A Solution to the Bottlenecks
The most efficient way to tackle these hurdles is to use Nextflow-based pipelines. Nextflow supports cloud computing, where virtual systems can be provisioned at a fraction of the cost and the setup is smooth enough to be done by a single individual, and it supports container systems such as Docker and Singularity.

Nextflow is a reactive workflow framework and a programming DSL (Domain Specific Language) that eases the writing of data-intensive computational pipelines.

As seen in the diagram below, the infrastructure to run Nextflow in the AWS cloud consists of a head node (an EC2 instance with the Nextflow and Nextflow Tower open-source software installed) wired to an AWS Batch backend that handles the tasks created by Nextflow. AWS Batch creates worker nodes at run-time, which can be either on-demand instances or Spot instances (for cost-efficiency). Input data is stored in an S3 bucket, from which the worker nodes in AWS Batch pull it; interim data, results, and the final output are also stored in S3 buckets. The pipeline to be run (e.g. RNA-Seq, Dual RNA-Seq, ViralRecon) is pulled by the worker nodes as a container image from a public repo like DockerHub or BioContainers.
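As a sketch of how this wiring looks from the head node's side, a nextflow.config along the following lines tells Nextflow to dispatch tasks to AWS Batch (the queue name, bucket path, region, and CLI path below are placeholders, not the portal's actual values):

```groovy
// Hypothetical nextflow.config sketch for the AWS Batch backend.
process {
    executor  = 'awsbatch'
    queue     = 'nextflow-job-queue'     // AWS Batch job queue provisioned by the portal
    container = 'nfcore/rnaseq:latest'   // pulled from a public registry at run-time
}
workDir = 's3://my-research-bucket/work' // interim data lives in S3
aws {
    region = 'us-east-1'
    batch {
        cliPath = '/home/ec2-user/miniconda/bin/aws'  // AWS CLI location on the worker AMI
    }
}
```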

RLCatalyst Research Gateway takes care of provisioning the infrastructure (EC2 node, AWS Batch compute environment, and Job Queues) in the AWS cloud with all the necessary controls for networking, access, data security, and cost and budget monitoring. Nextflow takes care of creating the job definitions and submitting the tasks to Batch at run-time.

The researcher initiates the creation of the workspace from within the RLCatalyst Research Gateway portal. There is a wide selection of input parameters, including which pipeline to run, tuning parameters to control the sizing and cost-efficiency of the worker nodes, the location of input and output data, etc. Once the infrastructure is provisioned and ready, the researcher can connect to the head node via SSH and launch Nextflow jobs, and can also connect to the Nextflow Tower UI to monitor the progress of jobs.


Pre-written Nextflow pipelines can be pulled from the nf-core GitHub repository and set up within minutes, allowing the entire analysis to run from a single-line command, with the results of each step displayed on the command line/shell. Configuration of cloud resources is seamless as well, since Nextflow-based pipelines support batch computing, enabling the analysis to scale as it progresses. Thus, researchers can focus on running the pipeline and analyzing the output data instead of investing time in setup and configuration.
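For illustration, such a single-line launch of the nf-core RNA-Seq pipeline might look like the following (the sample sheet and output directory are placeholders):

```shell
# Illustrative launch command for the nf-core RNA-Seq pipeline;
# Nextflow fetches the pipeline from GitHub on first use.
nextflow run nf-core/rnaseq \
  -profile docker \
  --input samplesheet.csv \
  --outdir results
```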

As seen from the pipeline output (MultiQC) report of the Nextflow-based RNA-Seq pipeline below, we can assess sequence quality from the FastQC scores, identify duplication issues from the contour plots, and examine the gene biotypes along with the fragment-length distribution for each sample.


RLCatalyst Research Gateway enables setting up and provisioning AWS cloud resources for such analysis with a few simple clicks, and the output of each run is saved in an S3 bucket, enabling easy data sharing. The provisioned resources are pre-configured with proper design templates and security architecture. In addition, RLCatalyst Research Gateway enables cost tracking for currently running projects, which can be paused, stopped, or deleted as convenient.

Steps for Running Nextflow-Based Pipelines in AWS Cloud for Genomic Research
Prerequisites for a researcher before starting data analysis:

  • A valid AWS account and access to the RLCatalyst Research Gateway portal
  • A publicly accessible S3 bucket containing the large research data sets
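For example, access to a public data set bucket can be verified from any machine with the AWS CLI installed (the 1000 Genomes open-data bucket is used here purely as an illustration):

```shell
# List the top level of a publicly accessible data set bucket;
# --no-sign-request works because the bucket allows anonymous reads.
aws s3 ls s3://1000genomes/ --no-sign-request
```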

Once done, below are the steps to execute this use case.

  • Login to the RLCatalyst Research Gateway Portal and select the project linked to your AWS account
  • Launch the Nextflow-Advanced product
  • Login to the head node using SSH (Nextflow software will already be installed on this node)
  • In the pipeline folder, modify the nextflow.config file to set the data location according to your needs (Github repo, S3 bucket, etc.). This can also be passed via the command line
  • Run the Nextflow job on the head node. This should automatically cause Nextflow to submit jobs to the AWS Batch backend
  • Output data will be copied to the Output bucket specified
  • Once done, terminate the EC2 instance and check for the cost spent on the use case
  • All costs related to the Nextflow project and researcher consumption are tracked automatically
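As a sketch of the run step, the data locations can also be supplied on the command line instead of editing nextflow.config (the pipeline revision and bucket names below are placeholders):

```shell
# Hypothetical invocation overriding data locations from the command line.
nextflow run nf-core/rnaseq -r 3.14.0 \
  -profile docker \
  -work-dir s3://my-scratch-bucket/work \
  --input  s3://my-input-bucket/samplesheet.csv \
  --outdir s3://my-output-bucket/results
```

Once launched from the head node, Nextflow submits each process as an AWS Batch job, and the outputs land in the specified S3 bucket.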

Key Points

  • Bioinformatics involves developing methodology and analysis tools to analyze large volumes of biological data
  • The vast number of tools available for a single analysis, and their version compatibility, make the analysis setup tricky
  • RLCatalyst Research Gateway enables setting up and provisioning Nextflow-based pipelines and AWS cloud resources with a few simple clicks

Summary
Researchers need powerful tools for collaboration and access to commonly used NGS pipelines with large data sets. Cloud computing makes this much easier, with access to workflows, data, computation, and storage. However, there is a learning curve: researchers need cloud-specific know-how to use resources optimally for large-scale computations like RNA-Seq analysis pipelines, which can also be quite costly. Relevance Lab, working closely with AWS as a partner, provides the RLCatalyst Research Gateway portal with commonly used pre-built Nextflow pipeline templates and integration with open-source repositories like nf-core and BioContainers. RLCatalyst Research Gateway enables execution of such Nextflow-based scalable pipelines on the cloud with a few clicks and configurations, with cost tracking and resource execution control features. By using AWS Batch, the solution is highly scalable and optimized for on-demand consumption.

For more information, please feel free to write to marketing@relevancelab.com.



As enterprises continue to rapidly adopt the AWS cloud, the complexity and scale of operations on AWS have increased exponentially. Enterprises now operate hundreds and even thousands of AWS accounts to meet their enterprise IT needs. With this in mind, AWS Management & Governance has emerged as a major focus area that enterprises need to address in a holistic manner to ensure efficient, automated, performant, available, secure, and compliant cloud operations.


Governance360 integrated with Dash ComplyOps

Governance360
Relevance Lab has recently launched its Governance360 professional services offering in the AWS Marketplace. This offering builds upon Relevance Lab’s theme of helping customers adopt AWS the right way.

Governance360 brings together the framework, tooling, and process for implementing best-practices-based AWS Management & Governance at scale for multi-account AWS environments. It helps clients seamlessly manage their "Day after Cloud" operations on an ongoing basis. The tooling leveraged for implementing Governance360 can include native AWS tools and services, RL's own tools, and third-party industry tools.

Typically, a professional service like Governance360 is engaged during or after a customer's transition to the AWS cloud (infrastructure and application migration, or development on the AWS cloud).

Dash ComplyOps Platform
The Dash ComplyOps platform enables and automates the lifecycle of a client's journey toward compliance of their AWS environments with industry-specific requirements such as HIPAA, HITRUST, SOC 2, and GDPR. The platform gives organizations the ability to manage a robust cloud security program through the implementation of guardrails and controls, continuous compliance monitoring, and advanced reporting and remediation of security and compliance issues.
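As a generic illustration of the kind of guardrail such continuous compliance monitoring builds on (this is standard AWS Config usage, not Dash ComplyOps' actual implementation), a managed AWS Config rule can be enabled from the CLI:

```shell
# Enable a managed AWS Config rule that flags S3 buckets
# accepting unencrypted (non-SSL) requests.
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "s3-bucket-ssl-requests-only",
  "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_SSL_REQUESTS_ONLY"}
}'
```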


Relevance Lab and Dash Solutions have partnered together to bring an end-to-end solution and professional service offering that helps customers realize an automated AWS Management & Governance posture for their environments meeting regulatory compliance needs.
As a part of this partnership, the Dash ComplyOps platform is integrated within the overall Governance360 framework. The detailed mapping of features, tooling, and benefits (including Dash ComplyOps as a tool) across Governance360’s major topic areas is articulated in the table below.

Benefits


Governance360 Topics and Benefits

Automation Lifecycle
  • Automate repetitive and time-consuming tasks
  • Automated setup of environments for common use cases such as regulatory, workloads, etc.
  • Codify best practices learned over time

Control Services
  • Automated & Standardized Account Provisioning
  • Cost & Budget Management

Architecture for Industry Standard Compliance, Monitoring, and Remediation
  • Disaster Recovery
  • Automated & Continuous Compliance Monitoring, Detection, and Remediation

Proactive Monitoring
  • Dashboards for monitoring the AWS environment from infrastructure to application

Security Management
  • Ease of deployment of security controls at scale using a CI/CD pipeline
  • Infra and application security threat monitoring, prevention, detection & remediation

Service & Asset Management
  • Software and asset management practice with a real-time CMDB for applications & infrastructure
  • Incident management and auto-remediation

Workload Migration & Management
  • Best-practices-based workload migration and implementations on the AWS cloud

Regulatory Compliance
  • Compliance with industry regulatory standards

Engagement Flow

Discovery & Assessment (typical duration: 1-3 weeks*)
  • Understand current state, data, management & governance goals

Analysis & Recommendation (1-3 weeks*)
  • Requirement analysis, apply the Governance360 framework, create recommendations & obtain client sign-off
  • Recommendations include services, tools, and dashboards, along with expected outcomes and benefits
  • Use of native AWS services, RL's monitoring & BOTs, the Dash ComplyOps platform, and other 3rd-party tools

Implementation (2-8 weeks*)
  • Implement, test, UAT & production cutover of recommended services, tools, and dashboards

Hypercare (1-2 weeks*)
  • Post-implementation support – monitor and resolve any issues faced

* Duration depends on the complexity and scope of the requirements.

Summary
Relevance Lab is an AWS consulting partner and helps organizations achieve automation-led cloud management using Governance360, based on AWS best practices. For enterprises with regulatory compliance needs, integration with the Dash ComplyOps platform provides an advanced setup for operating in a multi-account environment. While enterprises can try to build some of these solutions themselves, doing so is both time-consuming and error-prone and demands a specialist partner. Relevance Lab has helped multiple enterprises with this need and has a reusable automated solution and pre-built library to meet the security and compliance needs of any organization.

For more details, please feel free to contact marketing@relevancelab.com.

References
Governance 360 – Are you using your AWS Cloud "The Right Way"
AWS Security Governance for enterprises "The Right Way"
Dash ComplyOps by Dash Solutions
Governance360 available on AWS Marketplace





Provide researchers access to secure RStudio instances in the AWS cloud by using Amazon-issued certificates in AWS Certificate Manager (ACM) and an Application Load Balancer (ALB)

Cloud computing offers the research community access to vast amounts of computational power, storage, specialized data tools, and public data sets (collectively referred to as Research IT), with the added benefit of paying only for what is used. However, researchers may not be experts in using the AWS Console to provision these services in the right way. This is where software solutions like Service Workbench on AWS (SWB) make it possible to deliver scientific research computing resources in a secure and easily accessible manner.

RStudio is a popular tool in the scientific research community and is supported by Service Workbench. While researchers use RStudio daily, installing it securely on the AWS cloud and using it in a cost-effective manner is a non-trivial task, especially for researchers. With SWB, the goal is to make this process simple, secure, and cost-effective so that researchers can focus on "Science" and not "Servers", thereby increasing their productivity.

Relevance Lab (RL), in partnership with AWS, set out to make the experience of using RStudio with Service Workbench on AWS simple and secure.

Technical Solution Goals

1. A researcher should be able to launch an RStudio instance in the AWS cloud from within the Service Workbench portal.
2. The RStudio instance comes fully loaded with the latest version of RStudio and a variety of other software packages that help in scientific research computing.
3. The user launches the RStudio URL from within Service Workbench. This URL is unique, generated by SWB, and encoded with an authentication token that lets the researcher access the RStudio instance without remembering any passwords. The URL is served over SSL so that all communication is encrypted in transit.
4. Maintaining the certificates used for SSL communication should be cost-effective and should not require excessive administrative effort.
5. The solution should provide isolation of researcher-specific instances using allowed-IP lists controlled by the end user.

Comparison of Old and New Design Principles to Make the Researcher Experience Frictionless
The following section summarizes the old design and the new architecture that make the entire researcher experience frictionless. Based on feedback from researchers, it was felt that the older design required a lot of setup complexity and lifecycle upgrades for security certificate management, slowing down researchers' productivity. The new solution makes the lifecycle simple and frictionless, along with smart and innovative features to keep ongoing costs optimized.

RStudio Feature / Original Design Approach / New Design Approach

1. User-Generated Security Certificate for SSL Secure Connections to RStudio
   Original: Users have to create a certificate (e.g. with Let's Encrypt) and use it with the RStudio EC2 instance behind an NGINX server. This creates complexity in the certificate lifecycle: it is complex for end users to create, maintain, and renew, and the RStudio AMI also needs to manage the certificate lifecycle.
   New: Move from external certificates to AWS ACM. Bring in a shared AWS ALB (Application Load Balancer) and use AWS ACM certificates for each hosting account to simplify the certificate management lifecycle.

2. SSL Secure Connection
   Original: An SSL connection is created with the NGINX server on the RStudio EC2 instance; tied to custom certificate management.
   New: Replaced with an ALB at the account level, shared by all RStudio instances in the account. The user-portal-to-ALB connection is secured by ACM; for the ALB-to-RStudio-EC2 connection, a unique self-signed certificate encrypts the connection per RStudio instance.

3. Client Role (IAM) Changes in SWB
   Original: The client role is granted the permissions necessary for setup.
   New: Additional role privileges added to handle the ALB.

4. ALB Design
   Original: Did not exist in the original design.
   New: A shared ALB per hosting account, shared between projects. Each ALB is expected to cost about $20-50 monthly in shared mode with average use. An API model is used to create/delete the ALB.

5. Route 53 Changes on the Main Account
   Original: A CNAME record is created with the EC2 DNS name.
   New: A CNAME record is created with the ALB DNS name.

6. RStudio AMI
   Original: Embedded with certificate details; tied to custom certificate management.
   New: Independent of user-provided certificate details. The AMI has also been enhanced: a self-signed SSL certificate and additional packages (as commonly requested by researchers) are baked in.

7. RStudio CloudFormation Template (CFT)
   Original: The original template is removed from SWB.
   New: A new output is added to indicate the "Need ALB" flag, and a new target group is created to which the ALB can route requests.

8. SWB Hosting Account Configuration
   Original: Did not have to provision a certificate in AWS ACM.
   New: A manual process to set up a certificate in a new hosting account.

9. Provisioned RStudio per Hosting Account: Active Count Tracking
   Original: None.
   New: Needed to ensure the ALB is created when the first RStudio is provisioned and deleted after the last RStudio is deleted, to optimize ALB cost overheads.

10. SWB DynamoDB Table Changes
    Original: DynamoDB is used for all SWB tables.
    New: Modifications to support the new design, added to the existing DeploymentStore table in the SWB design.

11. SWB Provision Environment Workflow
    Original: Standard design.
    New: An additional step checks whether the "Workspace Type" needs an ALB; if it does, it checks for an existing ALB and either creates one or passes the reference to the existing one.

12. SWB Terminate Environment Workflow
    Original: Standard design.
    New: An additional step checks whether the last active RStudio is being deleted; if so, the ALB is also deleted to reduce idle costs.

13. Secure "Connection" Action from the SWB Portal to the RStudio Instance
    Original: To ensure each RStudio has a secure connection, a unique connection URL is generated for each user during the session, valid for a limited period.
    New: The original design is preserved; internally the routing is managed through the ALB, but the concept remains the same. Users do not have to remember a user id/password for RStudio, and a secure connection is always available.

14. Secure "Connection" from the SWB Portal Disallowing Other Users from Accessing RStudio Resources, Given the Shared ALB
    Original: N/A.
    New: The design feature in item 13 ensures that even with a shared ALB, the connection for a user (Researcher or PI) is restricted to their provisioned RStudio only; they cannot access other researchers' instances. The unique connection is system-generated from a unique user-to-RStudio mapping.

15. ALB Routing Rules for RStudio Secure Connections, Given the Shared Nature
    Original: N/A.
    New: Every time an RStudio is created, the ALB rules are changed to allow a secure connection between the user session and the linked RStudio; the same rules are cleaned up during the RStudio delete lifecycle. These routing-rule changes are managed from SWB code under the workflow customizations (items 11 and 12) using APIs.

16. RStudio Configuration Parameters Related to CIDR
    Original: Only allow-listed IP addresses may connect to associated RStudio instances; this can also be modified from configuration.
    New: The RStudio CloudFormation Template (CFT) takes a Classless Inter-Domain Routing (CIDR) range as an input parameter and passes it through as an output parameter, which SWB uses to create the ALB listener rule. SWB code takes the CIDR from the RStudio CFT output and updates the ALB listener rule for the respective target group.

17. Researcher Cost Tracking
    Original: RStudio costs were tracked per researcher; custom certificate costs, if any, were not tracked.
    New: RStudio costs are tagged and tracked per researcher; ALB costs are treated as shared costs for the hosting account.

18. RStudio Packaging and Delivery for a New Customer: Repository Model
    Original: Bundled with the standard SWB repo and installed.
    New: RL creates a separate repo hosting RStudio with associated documentation and templates for customers to use.

19. RStudio Packaging and Delivery for a New Customer: AWS Marketplace Model
    Original: None.
    New: RL to provide RStudio on the AWS Marketplace for SWB customers to add to the standard Service Catalog and import (future roadmap item).

20. Upgrade and Support Models for RStudio
    Original: Owned by the SWB team.
    New: Managed by RL teams.

21. UI Modification for Partner-Provided Products
    Original: No partner-provided products.
    New: Partner-provided products reside in the self-hosted repo; the SWB UI provides a mechanism to show partner names and a link to additional information.
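As a hypothetical illustration of the per-instance routing rules described above (the ARNs, host name, priority, and CIDR are placeholders, and the use of a host-header match is an assumption about how sessions are mapped to instances), such a rule can be created with the AWS CLI:

```shell
# Sketch: forward traffic for one RStudio instance, restricted to an allowed CIDR.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/swb-alb/abc/def \
  --priority 10 \
  --conditions '[{"Field":"host-header","HostHeaderConfig":{"Values":["rstudio-abc123.example.com"]}},{"Field":"source-ip","SourceIpConfig":{"Values":["203.0.113.0/24"]}}]' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/rstudio-abc123/ghi
```

Deleting the rule during the RStudio delete lifecycle would use the matching `aws elbv2 delete-rule` call.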


The diagram below explains the interplay between the different design components.


Secure and Scalable Solution Architecture
Keeping in mind the above design goals, a secure and scalable architecture was implemented that solves the problem of shared groups using products like RStudio, which require secure HTTPS access, without the overhead of individual certificate management. The architecture also allows the same concept to be reused for all future researcher products with similar needs without additional implementation overhead, resulting in increased productivity and lower costs.


The Relevance Lab team designed a solution centered on an EC2 Linux instance with RStudio and relevant packages pre-installed, delivered as an AMI.

1. When the instance is provisioned, it is brought up without a public IP address.
2. All traffic to this instance is delivered via an Application Load Balancer (ALB). The ALB is shared across multiple RStudio instances within the same account to spread the cost over a larger number of users.
3. The ALB serves over an SSL link secured with an Amazon-issued certificate maintained by AWS Certificate Manager.
4. The ALB costs are further brought down by provisioning it on demand when the first RStudio instance is provisioned. Conversely, the ALB is de-provisioned when the last RStudio instance is de-provisioned.
5. Traffic between the ALB and the RStudio instance is also secured with an SSL certificate that is self-signed but unique to each instance.
6. The ALB listener rules enforce the IP allow list configured by the user.
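Item 5 above, the per-instance self-signed certificate that encrypts ALB-to-instance traffic, can be generated at instance build time with a standard OpenSSL command such as the following (the file names and subject CN are placeholders, not the actual values baked into the AMI):

```shell
# Generate a throwaway self-signed certificate and private key,
# unique to this RStudio instance, valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout rstudio-key.pem -out rstudio-cert.pem \
    -days 365 -subj "/CN=rstudio-internal"
```

Because the ALB does not validate the backend certificate's chain, a self-signed certificate is sufficient to get encryption in transit on this hop.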

Conclusion
Both the SWB and Relevance Lab RLCatalyst Research Gateway teams are committed to making scientific research frictionless for researchers. With this shared goal, the new initiative speeds up collaboration and will help provide new, innovative open-source solutions leveraging Service Workbench on AWS and partner-provided solutions like this RStudio-with-ALB offering from Relevance Lab. The collaboration will soon add more solutions covering genomic pipeline orchestration with Nextflow, use of the HPC ParallelCluster, and secure research workspaces with AppStream 2.0, so stay tuned.

To get started with RStudio on SWB provided by Relevance Lab, use the following link:
Relevance Lab GitHub Repository for SWB Templates

For more information, feel free to contact marketing@relevancelab.com.

References
Service Workbench on AWS for driving Scientific Research
Service Workbench on AWS Documentation
Service Workbench on AWS GitHub Repository
RStudio Secure Architecture Patterns
Relevance Lab Research Gateway


