
While there is much talk about Digital Innovation leveraging the cloud, another key disruption in the industry is Applied Science Innovation, led by scientists and engineers targeting a broad range of disciplines in Engineering and Medicine. Relevance Lab is proud to make it easier to leverage powerful tools like High-Performance Computing (HPC) and Quantum Computing on AWS Cloud for such pursuits with our Research Gateway product.

What is Applied Science?
Applied Science uses existing scientific knowledge to solve day-to-day problems in areas like Health Care, Space, Environment, Transportation, etc. It leverages the power of new technologies such as Big Compute and Cloud to drive faster scientific research. Innovation in Applied Science has some unique differences compared to Digital Innovation:


  • Users of Applied Science are researchers, scientists, and engineers
  • Workloads for Applied Science are driven by more specialized systems and domain-specific algorithms & orchestration needs
  • Very large domain-specific data sets and collaboration with a large ecosystem of global communities are key enablers, with a focus on open-source and knowledge sharing
  • Specialized hardware and software are also key enablers

The term Big Compute describes large-scale workloads that require many cores (with specialized CPU and GPU types) working with very high-speed network and storage architectures. Such Big Compute architectures solve problems in image processing, fluid dynamics, financial risk modeling, oil exploration, drug design, etc.

Relevance Lab is working closely with AWS to pursue specialized use cases for Applied Science and Scientific Research using the cloud. A number of government, public, and private sector organizations are focusing large investments and scientific knowledge on driving innovation in these areas. A few specialized ones with well-known programs are listed below.


What is High Performance Computing?
Supercomputers of the past were very specialized and high-cost systems that could only be built and afforded by large, well-funded institutions. Cloud computing is driving the democratization of supercomputers by providing High Performance Computing (HPC) systems with specialized architectures. These systems combine the power of on-demand computing with large and specialized CPU/GPU types, high-speed networking, fast-access storage, and associated tools and utilities for workload orchestration and management. The figure below shows the key building blocks of HPC components on AWS Cloud.
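To make these building blocks concrete, here is a minimal sketch that uses boto3 to create a managed AWS Batch compute environment, one common way to assemble an on-demand compute pool on AWS. All names, subnets, and roles below are hypothetical placeholders, and a real HPC setup would add job queues, placement groups, and high-speed storage on top of this.

```python
import boto3

batch = boto3.client("batch")

# Sketch only: all names, subnets, and IAM roles are hypothetical placeholders.
batch.create_compute_environment(
    computeEnvironmentName="hpc-ondemand-pool",
    type="MANAGED",
    computeResources={
        "type": "EC2",                  # on-demand instances
        "minvCpus": 0,                  # scale to zero when idle
        "maxvCpus": 256,                # cap the size of the pool
        "instanceTypes": ["c5", "r5"],  # compute- and memory-optimized families
        "subnets": ["subnet-0example"],
        "securityGroupIds": ["sg-0example"],
        "instanceRole": "ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",
)
```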


What is Quantum Computing?
Quantum computing relies upon quantum theory, which deals with physical phenomena at the nano-scale. One of the most important aspects of quantum computing is the quantum bit (qubit), a unit of quantum information that can exist in a superposition of two states (for example, horizontal and vertical polarization) at the same time, thanks to the superposition principle of quantum physics.

The Amazon Braket quantum computing service helps researchers and developers use quantum computers and simulators to build quantum algorithms on AWS.
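To make the qubit discussion concrete, here is a minimal sketch using the Amazon Braket Python SDK (assuming the amazon-braket-sdk package is installed). It builds a two-qubit Bell-state circuit, where the Hadamard gate puts qubit 0 into superposition before entangling it with qubit 1, and runs it on Braket's local simulator.

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Bell state: superposition on qubit 0, then entanglement with qubit 1
bell = Circuit().h(0).cnot(0, 1)

result = LocalSimulator().run(bell, shots=1000).result()
print(result.measurement_counts)  # roughly half '00' and half '11'
```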


Key Use Cases:

  • Research quantum computing algorithms
  • Test different quantum hardware
  • Build quantum software faster
  • Explore industry applications

What Do Customers Want?
The availability of specialized services like HPC and Quantum Computing has made it extremely simple for customers to consume these advanced technologies and build their own supercomputers. However, when it comes to the adoption cycle, customers are hesitant due to the key concerns and asks summarized below:

Operational Asks:

  • The top challenge and fear on the cloud is the variable cost model, which can spring big surprises; customers want strong Cost Management & Tracking with auto-limits control
  • Security and data governance are also key priorities
  • Data transfer and management are the other key needs

Functional Asks:
  • Faster and easier design, provisioning, and development cycles
  • Integrated and automated tools for deployment and monitoring
  • Easy access to data and the ability to do self-service
  • Derive increased business value from Data Analytics and Machine Learning

How Does Research Gateway Solve Customer Needs?
AWS cloud offerings provide a strong platform for HPC and quantum computing requirements. However, enabling scientific research and the training of researchers requires offering these through a self-service portal that encapsulates the underlying complexity. In addition, proper cost tracking and controls, security, data management, and an integrated workbench are needed for a collaborative research environment.

To address the above needs, Relevance Lab has developed Research Gateway. It helps scientists accelerate their research on the AWS cloud with access to research tools, data sets, processing pipelines, and analytics workbenches in a frictionless manner. The solution also addresses the needs for tight budget control, data security, privacy, and regulatory compliance, while significantly simplifying the process of running complex scientific research workloads.

Research Gateway meets the following key dimensions of collaborative and secure scientific research:

  • Cost and Budget Governance: The solution offers easy Cost Tracking of research cloud resources to track, analyze, control, and optimize budget spending. Principal Investigators can also pause or stop a project if spending exceeds the set budget threshold.
  • Research Data & Tools for Easy Collaboration: Research Gateway provides the team of researchers a real-time view of the research-specific product catalog, costs, and governance, reducing the complexities of running scientific research on the cloud.
  • Security and Compliance: Principal Investigators have a unified view and control over security and compliance, covering identity management, data privacy, audit trails, encryption, and access management.

Principal investigators leading the research get a quick insight into the total budget, consumed budget, and available budget, along with the available research-specific products, as shown in the image below.

With Research Gateway, researchers can provision available research-specific products for their high-performance and quantum computing needs with just one click, launching scientific research in 30 minutes or less.


Summary
High Performance Computing and Quantum computing are essential to the advancement of science and engineering now more than ever. Research Gateway provides fundamental building blocks for Applied Science and Scientific Research in the AWS cloud by simplifying the availability of HPC and Quantum computing for customers. The solution helps create democratized supercomputers on-demand while eliminating the pain of managing infrastructure, data, security, and costs, enabling researchers to focus on science.

To know more about how you can launch high-performance and quantum computing research with just one click in 30 minutes using our solution at https://research.rlcatalyst.com, feel free to contact marketing@relevancelab.com

References
High-performance genetic datastore on AWS S3 using Parquet and Arrow
Parallelizing Genome Variant Analysis
Leveraging AWS HPC for Accelerating Scientific Research on Cloud
Genomics Cloud on AWS with RLCatalyst Research Gateway
Enabling Frictionless Scientific Research in the Cloud with a 30 Minutes Countdown Now!
Accelerating Genomics and High Performance Computing on AWS with Relevance Lab Research Gateway Solution




Cloud is no longer a “good-to-have” technology but rather a must-have for enterprises. Although cloud-led digital transformation has been a buzzword for years, enterprises had their own pace of cloud adoption. However, the pandemic necessitated the acceleration of cloud adoption. Enterprises are faced with a new normal of operation that requires the speed and agility of the cloud.

In this blog, we will discuss the ground realities and challenges. We will also explore how Relevance Lab (RL) offers the right mix of experience and proven approaches to grow in today’s hyper-agile industry environment.

A Changed Ground Reality
The pandemic has accelerated how organizations look at IT infrastructure spending and has permanently changed their cloud strategies and spending habits. Online reports suggest that 38% more companies took a cloud-first approach compared to 2020, with an increased focus on IaaS- and PaaS-based approaches.

According to a Gartner online survey, enterprises have brought forward their cloud adoption by several years, and this is expected to continue in the near future. The survey also predicts that enterprises will spend more on just-in-time, value-based adoption to match the demands of a hyper-competitive environment.

Migration and modernization with the cloud is a long-term trend, especially for enterprises that need to scale up. As CAPEX takes a back seat, OPEX is now at the forefront. The cloud as an industry has matured and evolved over time, enabling faster and better adoption with hyper-accelerator tools.

Criteria for the Successful Cloud Journey
The success of an enterprise’s cloud adoption journey can be evaluated by setting and measuring against the right KPIs. A successful cloud journey would help an enterprise achieve “business as usual” along with enhanced business outcomes and customer experience. It standardizes the framework for maintainability and traceability, improves security, and optimizes the cost of ownership, as shown in the image below.


Common Cloud Migration Challenges
Planning for and meeting all the criteria of a successful cloud journey has always been an uphill task. Some of the common challenges are:

Large Datasets: Businesses today are dealing with larger and more unstructured datasets than ever before.

Selection of the Right Migration Model: Enterprises starting their cloud journey must choose the right migration model for their needs, such as legacy re-write, lift & shift, and everything in between. The decision is based on various factors like cost and business outlook, and can impact business performance and operations in the long run.

Change Management for Adopting a New Way of Operating: Cloud migration requires businesses to build new skills at a rapid rate while adopting capabilities like real-time analytics and personalization.

Security Framework: The risk of hackers and security attacks is growing across most industries. To maintain security while successfully moving to the cloud, enterprises need robust planning and an action list. Enterprises must also choose a security framework that suits their size, industry, compliance, and governance needs.

Lack of Proper Planning: Rushed application assessments give rise to gaps that can affect the cloud environment. Since a move to the cloud impacts different verticals and the business as a whole, all stakeholders must be on the same page when it comes to the adoption plan.

Profound Knowledge: Cloud migration requires a dedicated and experienced team to troubleshoot any problems. While building an in-house team is a time-consuming and costly task, working with partners whose knowledge is spread thinly across many technologies may not be beneficial either. Enterprises need a partner with a focused understanding of the cloud migration niche, built on knowledge assimilated from engagements with various customers.

Continuous Effort: The cloud is ever-changing, with new developments and evolving paradigms. Thus, cloud migration is not a one-time task; it requires continuous effort to automate and innovate accordingly.

Solutions to Cloud Migration Challenges
Some of the potential solutions that an enterprise can adopt to overcome common challenges of cloud migration are:

  • Reassessing cloud business & IT plans
  • Identifying and remediating risks and gaps in data, compliance, and tech stack
  • Adopting detailed migration approaches with self-sufficient virtual ecosystems
  • Building, delivering, and failing fast
  • Using data-driven analysis to enable stakeholders to make quick and effective decisions

Planning and implementing these solutions requires extensive experience and knowledge. With the combination of the right approach and solution, enterprises can reap the benefits of the cloud with ease.

How Relevance Lab Helps Businesses Accelerate their Cloud Journey
Relevance Lab (RL) is a specialist company that helps customers adopt the cloud "The Right Way", covering the full lifecycle of migration, governance, security, monitoring, ITSM integration, app modernization, and DevOps maturity. We leverage a combination of services and products for cloud adoption. Helping customers through a "Plan-Build-Run" transformation, we drive greater velocity of product innovation, global deployment scale, and cost optimization.

Building Mature Cloud Journey
Moving to the cloud opens up numerous opportunities for enterprises. To reap all the benefits of cloud migration, enterprises need a comprehensive strategy focused on building value, speed, resilience, scalability, and agility to optimize business outcomes. Having worked with businesses across the globe for over a decade, our teams have seen a common trend: enterprises are often unaware of adoption challenges, the "day-after" surprises and complexities, and the chronology of their occurrence.

This raises the question: how can enterprises overcome such surprises? Relevance Lab helps you answer it with a comprehensive and integrated approach. Combining cloud expertise and experience, we help enterprises overcome any challenge or surprise coming their way. Meeting the current needs of clients, we help you build a cohesive and well-structured journey. Here are a few ways Relevance Lab helps you achieve it:

1. Assess the Current State & Maturity of the Cloud Journey
Any enterprise must get a clear picture of its current state before building a cloud strategy. At Relevance Lab, we help clients assess their structures and requirements to identify their current stage in the cloud maturity journey. The cloud maturity model has five stages: Legacy, Cloud Ready, Cloud Friendly, Cloud Resilient, and Cloud Native, as shown in the image below. This helps us adopt the right approach to match the exact needs of our clients.


Once the current stage is determined through an assessment, RL helps design an effective cloud strategy with a comprehensive and integrated approach, keeping a balance between cloud adoption and application modernization. We ensure that all elements of cloud adoption move together, i.e., cloud engineering, cloud security, cloud governance & operating model, application strategy, engineering & DevSecOps, and cloud architecture, as shown in the image below.


2. Execute & Deliver through a Cross-Functional Collaboration and Gating Process
After the approach is defined and the strategy is designed, workstreams that integrate people, tools, and processes are identified. Cloud adoption excellence is delivered through cross-functional collaboration and gating across workstreams and stages, as shown in the image below.


How We Helped a Publishing Major Migrate “The Right Way”
Let’s explore a detailed account of how we implemented this approach for a global publishing major to maximize cloud benefits.

The publishing major was heavily reliant on complex legacy applications and an outdated tech stack, resulting in security and legal liabilities. There was a pressing need to scale IT and product engineering to meet market demands driven by a usage uptick (triggered by the pandemic). Another immediate requirement was better data gathering and analytics to enable faster decision-making.

Relevance Lab provided an enterprise cloud migration solution with a data-driven plan and collaboration with business stakeholders. A comprehensive framework prioritizing customer-centric applications for scale and security was put in place. RL helped implement an integrated approach leveraging cloud-first, secure engineering and deployment practices along with automation to accelerate development, deployment, testing, and operations.


To further learn about the details of how RL helped the above global publishing giant, download our case study.

Conclusion
Given the current times, a cloud adoption strategy requires a data-backed understanding of current systems and logical next steps, ensuring business runs as usual. There are many challenges an enterprise may face throughout its cloud journey. Most of these may come as a surprise, as teams are often unaware of the chronological order in which complexities occur.

Relevance Lab, an AWS partner, has an integrated approach and offerings developed through years of experience in delivering successful cloud journeys to clients across all industries and regions. As with the global publishing major discussed in this blog, we have helped clients significantly reduce costs by implementing modernization in parallel behind the scenes while their business runs as usual.

To learn more about cloud migration or to implement it for your enterprise, feel free to connect with marketing@relevancelab.com

References:
Cloud Management, Automation, DevOps and AIOps – Key Offerings from Relevance Lab
Relevance Lab Playbooks for Frictionless IT and Business Operations
Leveraging Technology + Consulting Specialization for Products and Solutions




With the growing demand for moving to the cloud, organizations face various challenges, such as the ability to track costs, health, security, and assets at the application level. This ability can help organizations get a clear picture of their business metrics (revenue, transaction costs, customer-specific costs, etc.). Some of the other challenges they face are as follows:

  • No clear definition of what constitutes a service or application; the concept varies from customer to customer based on business criticality and need.
  • Separation of business applications from Internal applications or software services.
  • Deployment of applications across accounts and regions makes consolidation harder.
  • Dependent services and microservice concepts complicate the discovery process.
  • Complex setup involving clustered and containerized deployments promoting service-oriented architecture.
  • What is the target business/efficiency goal: tracking cost, better diagnostics, or CMDB? How does it link to business-unit or application-level spend tracking?

Modeling a Common Customer Use Case


A typical large enterprise goes through a maturity journey from scattered Infrastructure Asset Management to more mature Application Asset Management.

Need for Automated Application Service Mapping
Applications, related to business units and business services, are the common focal points that customers highlight.

  • It is important to track costs and expenses at the application level for chargebacks. This requires an asset- and cost-driven architecture. There is no common way to automate the discovery of such applications unless they are defined by customers and linked to their infrastructure.
  • Business endpoint applications are served as a combination of assets and services.
  • Knowing this dynamic topology can help with better monitoring, diagnostics, and capacity planning.
  • There is a way to discover the infrastructure linked to templates and a service registry, but no easy way to roll that up to an application linkage.

RLCatalyst AppInsights Solution
RLCatalyst AppInsights helps enterprises understand their current state of maturity by defining the global application master and its linkage to business units. This is done using a discovery process that links applications, assets, and costs as a one-time activity. In this process, assets fall into two categories: allocated or mapped assets (i.e., assets linked to templates) and unallocated assets (i.e., assets not linked to any template).


As shown in the above picture of the discovery process, all assets across your AWS accounts are brought into ServiceNow asset tables using the Service Management Connector. RLCatalyst AppInsights then demarcates assets linked to templates from those that are not (unallocated assets). At this stage, we have cost allocations across both assets linked to templates and unallocated assets. The next step is linking the templates to applications, creating a mapping between applications and business units.

Similarly, unallocated assets can either be linked to newly created templates or linked to a project and terminated/cleaned up. Once all this is in place, the data automatically builds your dashboard of cost by application, project, business unit, and unallocated costs.

For any new application deployment and infrastructure setup, the standard process ensures assets are provisioned through templates and appropriate tagging is enabled. This is enforced using guardrails for ongoing operations.
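To illustrate the allocated/unallocated demarcation, the hedged sketch below uses the AWS Resource Groups Tagging API via boto3. It treats any resource missing the system tag that CloudFormation applies to stack-provisioned resources as unallocated; the tag key is standard, but this classification rule is a simplified illustration of the discovery idea, not the product's exact logic.

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

unallocated = []
for page in tagging.get_paginator("get_resources").paginate():
    for res in page["ResourceTagMappingList"]:
        tags = {t["Key"]: t["Value"] for t in res.get("Tags", [])}
        # Resources provisioned by CloudFormation carry this system tag;
        # anything without it is treated here as an unallocated asset.
        if "aws:cloudformation:stack-name" not in tags:
            unallocated.append(res["ResourceARN"])

print(f"{len(unallocated)} unallocated assets found")
```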


As shown above, the plan is to have an updated version, AppInsights V2.0, on the ServiceNow Store by the end of 2022, which will include the following additional features:

  • Automated Application Service Discovery (AASD)
  • Cross-account application cost tracking
  • Support for non-CFT-based applications, such as those provisioned with Terraform
  • Security and Compliance scores at an account level
  • Support for AppRegistry 2.0

AWS Standard Products and Offerings in This Segment
AWS provides some key products and building blocks that are leveraged in the AppInsights solution.


Summary
Managing your cloud with an Application-Centric Lens can provide effective data analysis, insights, and controls that better align with how large enterprises track their business and Key Performance Indicators (KPIs). Traditionally, the cloud has provided a very Infrastructure-centric and fragmented view that does not allow for actionable insights. This problem is now solved by Relevance Lab AppInsights 2.0.

To learn more about building cloud maturity through an application-centric view, or to get started with RLCatalyst AppInsights, feel free to contact marketing@relevancelab.com

References
Governance 360 – Are you using your AWS Cloud “The Right Way”
ServiceNow CMDB
Increase application visibility and governance using AWS Service Catalog AppRegistry
AWS Security Governance for Enterprises “The Right Way”
Configuration Management in Cloud Environments




Automated deployment of software makes the process faster, easier, repeatable, and more supportable. A variety of technologies are available for deployment, but you need not necessarily choose a complex automation approach to reap the benefits. In this blog, we will cover how Relevance Lab approached using automation for the deployment of their RLCatalyst Research Gateway solution.

RLCatalyst Research Gateway solution from Relevance Lab provides a next-generation cloud-based platform for collaborative scientific research on AWS with access to research tools, data sets, processing pipelines, and analytics workbenches in a frictionless manner. The solution can be used in the Software as a Service (SaaS) mode, or it can be deployed in customers’ accounts in the enterprise mode. It takes less than 30 minutes to launch a working environment for Principal Investigators and Researchers with security, scalability, and cost governance.


During the deployment of this solution, several AWS resources are created:

  • Networking (VPC, Public and private subnets, Internet and NAT Gateways, ALB)
  • Security (Security Groups, Cognito User Pool for authentication, Identity and Access Management (IAM) Roles and Policies)
  • Database (AWS DocumentDB cluster)
  • EC2 Compute
  • EC2 Image Builder pipelines
  • S3 Buckets (storage)
  • AWS Service Catalog products and portfolios

When such a variety of resources are to be created, there are several benefits of automating the deployment.

  • Faster Deployment: It takes an engineer at least a few hours to deploy all the resources manually, assuming everything goes to plan; if errors are encountered, it takes longer. With an automated deployment, the process is much quicker and can be done in 15-30 minutes.
  • Easier: The deployment automation encapsulates and hides a lot of the complexity of the process, and the engineer performing the task does not need to know a lot of the different technologies in depth. Also, since the automation has been hardened over time through repeated testing in the lab, much of the error handling has been codified within the scripts.
  • Repeatable: The deployment done via automation always comes out exactly as designed. Unlike manual deployment, where unforced user errors can creep in, the scripts perform each run exactly the same. Also, scripts can be coded to fix broken installs or redeploy solution software.
  • Supportable: Automation scripts can have logging, which makes it easy for support personnel to help in case things don’t go as planned.

There are many technologies that can help automate the deployment of software. These include tools like Chef and Ansible, language-specific package managers like PyPI or npm, and Infrastructure as Code (IaC) tools like CloudFormation or Terraform. For RLCatalyst Research Gateway, which is built on AWS, we picked CloudFormation Templates (CFTs) for our IaC needs, in combination with plain old shell scripts. Find our deployment scripts on GitHub.
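As an illustration of the pattern such scripts follow, the sketch below drives a single CloudFormation stack from Python with boto3: create the stack, then block until it completes. The stack name, template URL, and parameters are hypothetical placeholders, not the actual Research Gateway artifacts.

```python
import boto3

cfn = boto3.client("cloudformation")
stack_name = "research-gateway-main"  # hypothetical stack name

cfn.create_stack(
    StackName=stack_name,
    # Hypothetical template location; real templates live in the deployment S3 bucket
    TemplateURL="https://s3.amazonaws.com/rg-deployment-artifacts/main-stack.yml",
    Parameters=[
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0example"},  # placeholder
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles and policies
)

# Block until the stack and all its resources are fully created
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
```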


  • Pre-requisites: We deploy Research Gateway in a standard Virtual Private Cloud (VPC) architecture with both public and private subnets. This VPC can be created using a quickstart available from AWS itself.
  • Infrastructure: The infrastructure is created as five different stacks.
    • Amazon S3 bucket: This is used to hold all the deployment artifacts like CFT templates.
    • AWS Cognito UserPool: This is used for authentication.
    • AWS DocumentDB: This is used to store all persistent data required by Research Gateway.
    • Amazon EC2 Image Builder: Pipelines are created to rebuild Amazon Machine Image (AMI) for the standard catalog items that are AMI-based. This ensures that the AMIs have the latest patches and security fixes.
    • Amazon EC2 (main stack): This hosts the Research Gateway portal.
  • Configuration: Some of the instance-specific data is part of the configuration, which is stored in one of the following ways.
    • Files: Configuration files are created during the deployment process, using data provided at the time. These files are referred to by the solution software to customize its behavior. File-based configurations are easy for support personnel to access and can be checked quickly in case the solution software is not behaving as expected.
    • Database Entries: A configs collection in the database holds some of the information. Ideally, all configuration could reside in the database, but because the database is encrypted and has restricted access, we prefer to keep some of the configuration outside the DB.
    • AWS Systems Manager (SSM) Parameter Store: Some configurations, especially those related to AMIs, which are resolved by CFTs at run-time, are maintained in the AWS SSM Parameter Store (see the sketch after this list).
  • Research Gateway Solution Software: Distributed as Docker images via AWS Elastic Container Registry (ECR). This allows us to distribute the solution software privately to the customers’ AWS accounts. Our solution software runs as a set of Docker services. A variation of the deployment script can also deploy these as services into AWS Elastic Kubernetes Service.
  • Load-balancing: The EC2 instances deployed register themselves with Target Groups, and an Application Load Balancer serves the application securely over SSL using certificates hosted in AWS Certificate Manager.
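As an example of the SSM-based configuration mentioned in the list above, reading a parameter from Python takes only a few lines with boto3. The parameter name below is a hypothetical placeholder; actual keys depend on the deployment.

```python
import boto3

ssm = boto3.client("ssm")

# Hypothetical parameter name; real keys depend on the deployment
resp = ssm.get_parameter(Name="/research-gateway/ami/nextflow-advanced")
ami_id = resp["Parameter"]["Value"]
print(ami_id)
```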

Once the solution software is deployed and the portal is running and reachable, the first user (with an Admin role) is created using a script. Using that Administrator's credentials, the rest of the onboarding process can be completed by the customer from the UI.

Summary
Using the automated deployment process, an instance of the RLCatalyst Research Gateway can be provisioned and configured in less than 30 minutes. This allows customers to start using the solution quickly and derive maximum benefits from their investment with minimum effort.

If you would like to launch your scientific research environment in less than 30 minutes with RLCatalyst Research Gateway or would like to learn more about it, write to us at marketing@relevancelab.com.

References
Architecting a Cloud-based Application with AWS Best Practices
Enabling Frictionless Scientific Research in the Cloud with a 30 Minutes Countdown Now!




In recent times, Next Generation Sequencing (NGS) has transformed from being solely a research tool to being routinely applied in many fields, including diagnostics, outbreak investigations, antimicrobial resistance, forensics, and food authenticity. The use of the cloud and modern open-source tools is driving advancement at a rapid pace, with continuous improvement in quality and reduction in cost, and is having a major influence on food microbiology. Public health labs and food regulatory agencies globally are embracing Whole Genome Sequencing (WGS) as a revolutionary new method. In this blog, we introduce this interesting area and cover a common use case of bacterial genome analysis in the cloud using our Research Gateway. We will show how to run a powerful tool like Bactopia, a flexible pipeline for complete analysis of bacterial genomes, in a few minutes.

What is Bactopia?
Sequencing of bacterial genomes is gathering momentum and greater adoption. Bactopia, developed by Robert A. Petit III, is a series of pipelines built using the Nextflow workflow software to provide efficient comparative genomic analyses for bacterial species or genera. This pipeline has more advanced features than many others in a similar space.

The image below shows the High Level Components of Bactopia Pipeline.


What Makes Bactopia More Powerful Compared to Other Similar Solutions?
The following data shared by the authors of this pipeline highlights the key strengths.



Usually, setting up a secure environment and getting access to data, big compute, and analytics tools takes significant effort for researchers. With Research Gateway built on AWS, we make it extremely simple to get started.

An Introduction to Running Bactopia on AWS Cloud
Bactopia is a software pipeline for the complete analysis of bacterial genomes, based on the Nextflow bioinformatics workflow software. Research Gateway supports running Nextflow-based pipelines with great ease, and we will show how the same can be achieved with Bactopia.

Steps for Running Bactopia Pipeline on AWS Cloud
Step-1: Using the publicly available Bactopia repository on GitHub, a new AWS AMI is created by installing the Bactopia software on the Nextflow Advanced product available in the Research Gateway standard catalog. This step is needed because Bactopia integrates a large number of specialized tools and embeds Nextflow internally for its execution. Once the new Bactopia AMI is ready, it is added to AWS Service Catalog and imported into Research Gateway for use by researchers. The product is then available in the standard products category to be launched with one click, as shown below.


Step-2: Once the Bactopia product is ordered using the simple screen shown above, in about 10 minutes the setup, with all the tools, Nextflow, and Nextflow Tower, is provisioned and ready to be used. The user can log in to the Bactopia server using the SSH key pair available from within the Portal UI via the "SSH/RDP Connect" action, as shown below.

Step-3: Copy data to the Bactopia server for the samples to be processed, and start the workflow execution as per the available documentation. In our case, we tried a smaller set of sample data, and it took us 15 minutes to run the pipeline and view outputs in the console window.
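Staging the sample data onto the server can be scripted as well. A minimal sketch using boto3 is shown below; the bucket and keys are hypothetical placeholders for paired-end FASTQ files.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and keys for a paired-end FASTQ sample
bucket = "my-research-data-bucket"
for key in ("samples/sample01_R1.fastq.gz", "samples/sample01_R2.fastq.gz"):
    # Download each file into the current working directory on the server
    s3.download_file(bucket, key, key.rsplit("/", 1)[-1])
```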

Step-4: While the pipeline is executing under Nextflow Tower, details of the jobs and all key metrics can be viewed from within Research Gateway by selecting the "Monitor Pipeline" action. The complexity of the different tools integrated on the platform is invisible to the user, making it a seamless experience.

Step-5: The outputs generated by the Bactopia pipeline can be viewed from within the Portal using the "View Outputs" action, which lets users browse the outputs in a simple browser and open them with specialized tools like Integrative Genomics Viewer (IGV), MultiQC reports, etc.

All products used in Research Gateway are automatically tagged and tracked for cost purposes, so total consumption can easily be verified by project, researcher, and product type, providing a powerful cost-management and budget-tracking tool.

Summary
As Genomics adoption grows and new use cases emerge for leveraging the power of this technology, food safety is a growing focus, with a need for bacterial genome analysis using advanced pipelines popular in the open-source community. To help researchers use such powerful tools with speed on the cloud, without getting into the complexity of infrastructure, networks, and security, and with their focus kept on science, we have demonstrated in this blog the ability to use Research Gateway to run your first pipeline in less than 60 minutes.

To know more about how you can start your bacterial genome analysis pipelines on the AWS Cloud in less than 60 minutes using our solution at https://research.rlcatalyst.com, feel free to contact marketing@relevancelab.com

References
An introduction to running Bactopia on Amazon Web Services (May 2021)
Using AWS Batch to process 67,000 genomes with Bactopia (December 2020)
Accelerating Genomics and High Performance Computing on AWS with Relevance Lab Research Gateway Solution




Research Gateway, the SaaS solution from Relevance Lab, provides a next-generation cloud-based platform for collaborative scientific research on AWS, with access to research tools, data sets, processing pipelines, and analytics workbenches in a frictionless manner. It takes less than 30 minutes to launch a "MyResearchCloud" working environment for Principal Investigators and Researchers with security, scalability, and cost governance. The Software as a Service (SaaS) model is a preferable option for consuming functionality, but in the area of scientific research it is equally critical to keep tight control over data security, privacy, and regulatory compliance.

One of the growing needs from customers is to use the solution for online training and specialized use cases such as Bioinformatics courses. With the pandemic, there is tremendous new interest among students in pursuing life sciences courses and specializing in Bioinformatics streams. At the same time, education institutions are struggling to move their internal training lab infrastructure from data centers to the cloud. As an AWS specialized partner for Higher Education, we are working with a number of universities to understand their needs better and provide solutions that address them in an easy and cost-effective manner.

The top five use cases shared by customers for setting up Virtual Cloud Labs for courses like Bioinformatics are the following:


  • Enterprise Needs: The ability to move from data-center-based physical labs to cloud-based virtual labs using corporate cloud accounts easily, without compromising on security, tight cost controls, or a self-service portal for instructors and students. Enterprise-grade controls on budget, student/instructor access, data security, and an approved product catalog.
  • Business Needs: The setup of a New Virtual Training Lab should support the key learning and research needs of the students.
    • Programs that provide lab access to students on a calendar basis for the duration of a full semester.
    • Longer-term projects and programs, with lab access based on research grants and associated budget/time constraints.
  • IT Department Needs: University corporate IT needs to be able to allow specific departments (like Bioinformatics) to have their own programs and projects with self-service, without compromising on enterprise security and compliance needs.
  • Curriculum Department Needs: Department heads (like Bioinformatics) and instructors need to be able to define the learning curriculum and associated training programs with access to classroom and research labs. Departments also need tight control over budgets and student access management.
  • Student Needs: The ability for students to access cloud-based training labs in a very easy and simple manner, without requiring deep cloud knowledge. Also, pre-built solutions for basic needs covering analytics tools like RStudio/Jupyter, access to secure data repositories, open-source tools/containers, and a collaboration portal.

The following picture describes the basic organization and roles setup in a university.



To balance the needs of speed and compliance, we have designed a unique model that allows universities to "Bring Your Own License" while leveraging the benefits of SaaS in a unique hybrid approach. Our solution provides a "Gateway Model" of hub-and-spoke design, where we provide and operate the "Hub" while enabling universities and their departments to connect their own AWS research accounts as "Spokes" and get started within 30 minutes with full access to a complete classroom toolkit. A sample of the out-of-the-box Bioinformatics Lab tools available in the standard catalog is shown below.


Professors can add more tools to the standard catalog by importing their own AMIs using AWS Service Catalog. It is also very simple to create new course material and support additional tools using the base building blocks provided out-of-the-box.

Currently, it is not easy for universities, their IT staff, professors, students, and research groups to leverage the cloud for scientific research. There are constraints with on-premise data centers, and while these institutions have access to cloud accounts, converting a basic account into a secure network with secure access, the ability to create and publish a product/tool catalog, ingress and egress of data, sharing of analyses, and tight budget enforcement is a non-trivial task that diverts attention away from education to infrastructure.

Based on our discussions with stakeholders, it was clear that users want something as easy to consume as other consumer-oriented services like e-shopping and consumer banking. This led to the simplified process of creating a "My-Bioinformatics-Cloud-Lab" with the following basic needs:

1. A university can sign up with Research Gateway (SaaS) to enable its different departments to use the software for online training and research needs. Such university-level adoption is recommended with the enterprise version of the software (hosted by us or by the university itself) and used across departments (called Organizations or Business Units).
2. A simpler alternative is for a particular department to use our hosted version of Research Gateway and create a tenant, with no overhead of maintaining a university-specific deployment.
3. A Head of Department (HOD) can sign up to create a new tenant on Research Gateway and configure their own AWS billing account to create projects. Each project can then invite other professors to be part of the online training labs. Projects can be aligned with semester-based classroom lab needs or can be part of ongoing research projects. Each project has an assigned budget along with the associated professors and students who have access to it. The figure below shows typical department projects inside the portal.


4. Once professors select a project, they can see the standard "available products" in it. This project is used as the basic setup for a training lab. The figure below shows a sample screen of the default set of tools professors can access. They can also add new products to the lab catalog.


For every project (lab), shared infrastructure is made available by default in the form of project storage, where curriculum-related data and information can be stored and made available to all students. The necessary security aspects, such as SSL connections, VPC, and IAM roles, are also set up by default to ensure the cloud training lab has a well-architected design.

5. A professor can control the basic parameters of the lab: adding/deleting users, managing budgets, and taking actions like "Pausing" a project (no new products can be created while existing ones can still be used) or "Stopping" the project (all running machines are force-stopped and no new ones can be created; however, data on the storage remains accessible to students). The figure below shows how to manage project-level users and budget controls.


6. A professor can track the consumption of lab resources by all users, including professors and students, as shown in the figure below.


7. Once students log into the project and access the lab resources, they can create their own workspaces, like RStudio, and interact with them from within the Portal. When done with their work, they can stop the machine and log out to ensure no costs accrue while the systems are not in use. When a researcher or student logs in, they can interact with active products and project storage, as shown in the figure below.


8. Students can interact with tools like RStudio from within the portal and connect to them securely with a single click, as shown in the figure below.


9. Clicking the "Open Link" action opens the familiar RStudio environment for students to log in and learn as per their curriculum needs. The figure below shows the standard RStudio environment.


Summary
The new solution from Relevance Lab makes scientific research and training in the cloud very easy for use cases like Bioinformatics. It provides flexibility, cost management, and secure collaboration to truly unlock the potential of the cloud. For higher education universities, this provides a fully functional training lab accessible by professors and students in less than 30 minutes.

If this seems exciting and you would like to know more or try this out, do write to us at marketing@relevancelab.com

References
University in a Box – Mission with Speed
Leveraging AWS HPC for Accelerating Scientific Research on Cloud
Enabling Frictionless Scientific Research in the Cloud with a 30 Minutes Countdown Now!




As a researcher, do you want to get started in minutes running complex genomics pipelines with large data sets, without worrying about hours of environment setup, the availability and storage of large data sets, the security of your cloud infrastructure, and, most of all, unknown expenses? RLCatalyst makes your life simpler, and in this blog we cover how easy it is to run publicly available genomics pipelines from nf-co.re using Nextflow in your AWS Cloud environment.

There are a number of open-source tools available for researchers, driving re-use. However, Research Institutions and Genomics companies are looking for the right balance across three key dimensions before adopting the cloud at scale for internal use:

  • Cost and Budget Governance: Strong focus on cost tracking of cloud resources to track, analyze, control, and optimize budget spend.
  • Easy Collaboration on Research Data & Tools: Principal Investigators and researchers need to focus on data management, governance, and privacy, along with real-time analysis and collaboration, without worrying about cloud complexity.
  • Security and Compliance: Research requires a strong focus on security and compliance, covering identity management, data privacy, audit trails, encryption, and access management.

To ensure these requirements do not keep researchers from focusing on science due to the complexities of infrastructure, Research Gateway provides a reliable solution by automating cost and budget tracking with safeguards and providing a simple self-service model for collaboration. In this blog, we demonstrate how researchers can easily use a vast set of publicly available tools, pipelines, and data on this platform with tight budget controls. Here is a quick video of the ease with which researchers can get started in a frictionless manner.

nf-co.re is a community effort to collect a curated set of analysis pipelines built using Nextflow. A key aspect of these pipelines is that they adhere to strict guidelines, ensuring they can be reused extensively. These pipelines have the following advantages:


  • Cloud-Ready – Pipelines are tested on AWS after every release. You can even browse results live on the website and use outputs for your own benchmarking.
  • Portable and reproducible – Pipelines follow best practices to ensure maximum portability and reproducibility. The large community makes the pipelines exceptionally well tested and easy to run.
  • Packaged software – Pipeline dependencies are automatically downloaded and handled using Docker, Singularity, Conda, or others. No need for any software installations.
  • Stable releases – nf-core pipelines use GitHub releases to tag stable versions of the code and software, making pipeline runs totally reproducible.
  • CI testing – Every time a change is made to the pipeline code, nf-core pipelines use continuous integration testing to ensure that nothing has broken.
  • Documentation – Extensive documentation covering installation, usage, and description of output files ensures that you won’t be left in the dark.

Below is a sample of commonly used pipelines that are supported out-of-the-box in Research Gateway, which run with a few clicks and perform important genomic analyses. While publicly available repos are easily accessible, the platform also allows private repositories and custom pipelines to run with ease.


  • Sarek: Analysis pipeline to detect germline or somatic variants (pre-processing, variant calling, and annotation) from Whole Genome Sequencing (WGS) / targeted sequencing. Commonly used for variant analysis on whole-genome or targeted sequencing data.
  • RNA-Seq: RNA-sequencing analysis pipeline using STAR, RSEM, HISAT2, or Salmon, with gene/isoform counts and extensive quality control. Commonly used for basic RNA-sequencing analysis with a reference genome and annotation.
  • Dual RNA-Seq: Analysis of dual RNA-seq data, an experimental method for interrogating host-pathogen interactions through simultaneous RNA-seq. Commonly used specifically for such host-pathogen studies.
  • Bactopia: A flexible pipeline for complete analysis of bacterial genomes. Commonly used for bacterial genomic analysis with a focus on food safety.
  • Viralrecon: Assembly and intrahost/low-frequency variant calling for viral samples. Commonly used for metagenomics and amplicon sequencing data derived from the Illumina sequencing platform.

*The above samples can be launched in less than 5 minutes, cost less than $5 to run with test data, and deliver productivity gains of around 80%.

The figure below shows the building block of this solution on AWS Cloud.


Steps for running nf-core pipeline with Nextflow on AWS Cloud


1. Log into RLCatalyst Research Gateway with a Principal Investigator or Researcher profile. Select the project for running genomics pipelines and, the first time, create a new Nextflow Advanced product. (Time: 5 min)
2. Select the input data location, the output data location, and the pipeline to run (from nf-co.re), and provide parameters (container path, data pattern to use, etc.). Default parameters are already suggested for the use of AWS Batch with Spot instances, with all other AWS complexities abstracted from the end user for simplicity. (Time: 5 min to provision a new Nextflow & Nextflow Tower server on AWS, with the AWS Batch setup completed in one click)
3. Execute the pipeline (using the UI or by SSH into the head node) on the Nextflow server; see the sketch after this list. New pipelines can be run, their status monitored, and their outputs reviewed from within the Portal UI. (Time: pipelines can take a while to run, depending on the size of data and complexity)
4. Monitor live pipelines with the one-click launch of Nextflow Tower, integrated with the portal. Also view pipeline outputs in the output S3 bucket from within the Portal. Use specialized tools like MultiQC, IGV, and RStudio for further analysis. (Time: 5 min)
5. All costs related to users, products, and pipelines are automatically tagged and can be viewed on the Budgets screen to track the cloud spend for pipeline execution, including dynamically provisioned AWS Batch HPC instances. Once the pipelines have executed, the Nextflow server can be stopped or terminated to reduce ongoing costs. (Time: 5 min)
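For step 3 above, the pipeline launch on the head node boils down to a single Nextflow invocation. A hedged sketch driving it from Python is shown below; the output bucket is a hypothetical placeholder, and "test,docker" is the standard nf-core profile for a quick run on bundled test data.

```python
import subprocess

# Assumes Nextflow is installed on the head node (as provisioned by the
# Nextflow Advanced product); raises if the pipeline exits non-zero.
subprocess.run(
    [
        "nextflow", "run", "nf-core/rnaseq",
        "-profile", "test,docker",
        "--outdir", "s3://my-output-bucket/rnaseq-results",  # hypothetical bucket
    ],
    check=True,
)
```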

The figure below shows the Nextflow Architecture on AWS.


Summary
The nf-co.re community is constantly striving to make genomics research in the cloud simpler. While these pipelines are easily available, running them on AWS Cloud with proper cost tracking, collaboration, data management, and an integrated workbench was missing; that gap is now closed by Research Gateway. Relevance Lab, in partnership with AWS, has addressed this need with their Genomics Cloud solution to make scientific research frictionless.

To know more about how you can start your Nextflow nf-co.re pipelines on the AWS Cloud in 30 minutes using our solution at https://research.rlcatalyst.com, feel free to contact marketing@relevancelab.com

References
Enabling Researchers with Next-Generation Sequencing (NGS) Leveraging Nextflow and AWS
Pipelining GATK with WDL and Cromwell on AWS Cloud
Genomics Cloud on AWS with RLCatalyst Research Gateway
Health Informatics and Genomics on AWS with RLCatalyst Research Gateway
Accelerating Genomics and High Performance Computing on AWS with Relevance Lab Research Gateway Solution




Relevance Lab launches its professional services for Service Workbench on AWS (SWB), available to customers through AWS Marketplace. SWB is a cloud-based open-source solution that caters to the needs of the scientific research community by empowering both researchers and research IT teams.

Relevance Lab is a preferred partner for SWB, helping customers adopt this open-source solution seamlessly. We have deep expertise and can help with assessment, planning, deployment, training, customization, and ongoing managed services support in a cost-effective manner.

Highlights of Professional Services Offering

  • Service Workbench on AWS, an open-source solution, is fully supported with deep competence across the Plan-Build-Run lifecycle
  • Assessment, planning, deployment, training, customization, and ongoing managed services support
  • Cost-effective and flexible engagement models

With Relevance Lab’s professional services for SWB, IT teams are able to deliver secure, repeatable, and federated access control to data, tooling, and compute power to researchers, driving frictionless scientific research on the cloud.

Key Offerings

  • Assessment, implementation, and training for new and existing setups
  • Advanced setup and premium support, including underlying infrastructure with special needs around security, compliance, data protection, and scalability
  • Ongoing managed services and support, including upgrades, monitoring, and incident management
  • SWB code and new-feature customization and enhancement services, for custom catalog items like RStudio on ALB

What Does it Mean for the Scientific Research Community?
Relevance Lab's professional services offering for Service Workbench on AWS enables IT teams to provide the secure, repeatable, and federated control of access to data, tooling, and compute power that researchers need. With Service Workbench, researchers no longer have to worry about navigating cloud infrastructure. They can focus on achieving research missions and completing essential work in minutes, not months, in pre-configured research environments.

Frequently Asked Questions

Question-1  How do I get started using SWB and RStudio with ALB?
Answer:  We have a dedicated landing page, sign-up page, and support model.

Question-2  What is a typical customer end-to-end journey?
Answer:  Most customers look for the following support across the adoption lifecycle.

  • One-time onboarding
  • Product customization services
  • Ongoing managed services and support
  • T&M services for anything additional

Question-3  How long does onboarding take, and what does it cost?
Answer:  A standard onboarding for a new customer takes about 2 weeks covering initial assessment, installation, configurations, training, and basic functionality demonstration for a new setup. It costs about US $10,000.

Question-4  What sort of support is available post onboarding?
Answer:  Following are the common support activities requested:

  • L0 – Monitoring and Diagnostics
  • L1 – Technical Queries on how to use the product effectively
  • L2 – Ongoing upgrades, troubleshooting, configurations
  • L3 – Customization and enhancements (typically for changes under 40 hours per request)
  • Project Engagement – typically for 40+ hours of enhancement/customization work

Question-5  What is the engagement model for ongoing support or customizations?
Answer:  Two models of support are offered: Basic and Premium. For customizations, both project-based and Time & Material engagement models are possible.

Looking Ahead
SWB is available as an open-source solution and provides useful functionality for enabling a self-service portal for research customers. However, without a dedicated partner supporting the complete lifecycle, it can be a daunting exercise for customers and an overhead for their internal IT teams. Based on feedback from early adopters, and in partnership with AWS, we are happy to launch specialized professional services on AWS Marketplace to make customer adoption frictionless. Keeping the open-source nature in mind, the services are optimized to be cost-effective and flexible, with the goal of making scientific research in the cloud faster, cheaper, and better.

To learn more about Relevance Lab’s professional services for Service Workbench, feel free to write to marketing@relevancelab.com

References
Relevance Lab Open-Source Collaboration with Service Workbench on AWS
Service Workbench Template on Github




Developed in the Data Sciences Platform at the Broad Institute, the Genome Analysis Toolkit (GATK) offers a wide variety of tools with a primary focus on variant discovery and genotyping. With our Genomics Cloud solution and a one-click model, Relevance Lab is pleased to offer researchers the ability to run their GATK pipelines on AWS, something that has been missing so far.

GATK makes scientific research simpler for genomics by providing best-practices workflows and Docker containers. The workflows are written in the Workflow Description Language (WDL), a user-friendly scripting language maintained by the OpenWDL community. Cromwell is an open-source workflow execution engine that supports WDL as well as CWL (the Common Workflow Language) and can run on a variety of platforms, both local and cloud-based. RLCatalyst Research Gateway has added support for the Cromwell engine, enabling researchers to run popular workflows on AWS seamlessly. Some of the popular workflows available for a quick start are the following:



The figure below shows the building block of this solution on AWS Cloud.


Steps for running GATK with WDL and Cromwell on AWS Cloud


Steps (with approximate time taken):

  1. Log into RLCatalyst Research Gateway with a Principal Investigator or Researcher profile. Select the project for running Genomics pipelines and, for a first-time setup, create a new Cromwell Advanced product. (Time taken: about 5 minutes)
  2. Select the input data location, the output data location, and the pipeline to run (from GATK), and provide parameters (input.json). Default parameters are already suggested for the use of AWS Batch with Spot Instances, and all other AWS complexities are abstracted from the end user for simplicity. (Time taken: about 5 minutes to provision a new Cromwell server on AWS, with the AWS Batch setup completed in 1 click)
  3. Execute the pipeline on the Cromwell server, either through the UI or by SSH into the head node. New pipelines can be run, their status monitored, and their outputs reviewed from within the portal UI; a minimal scripted alternative is sketched after this list. (Time taken: depends on the size of the data and the complexity of the pipeline)
  4. View the pipeline outputs in the output S3 bucket from within the portal. Use specialized tools like MultiQC, Integrative Genomics Viewer (IGV), and RStudio for further analysis. (Time taken: about 5 minutes)
  5. All costs related to users, products, and pipelines are automatically tagged and can be viewed in the budgets screen, showing the cloud spend for a pipeline execution across all resources, including the AWS Batch HPC instances provisioned dynamically. Once the pipelines have run, the Cromwell server can be stopped or terminated to reduce ongoing costs. (Time taken: about 5 minutes)
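
For illustration, the sketch below shows how step 3 could be scripted against a Cromwell server's standard REST API instead of the portal UI. It assumes the Python requests package is installed; the host name, WDL file, and inputs file are placeholders, not values from this solution.

    import requests

    CROMWELL = "http://cromwell-host:8000"  # placeholder: your Cromwell server address

    def submit_workflow(wdl_path, inputs_path):
        """Submit a WDL workflow and its inputs.json to Cromwell's REST API."""
        with open(wdl_path, "rb") as wdl, open(inputs_path, "rb") as inputs:
            resp = requests.post(
                f"{CROMWELL}/api/workflows/v1",
                files={"workflowSource": wdl, "workflowInputs": inputs},
            )
        resp.raise_for_status()
        return resp.json()["id"]  # Cromwell returns a workflow id on submission

    def check_status(workflow_id):
        """Poll the run status (Submitted, Running, Succeeded, Failed, ...)."""
        resp = requests.get(f"{CROMWELL}/api/workflows/v1/{workflow_id}/status")
        resp.raise_for_status()
        return resp.json()["status"]

    run_id = submit_workflow("haplotypecaller.wdl", "inputs.json")  # hypothetical files
    print(run_id, check_status(run_id))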


The figure below shows the ability to select Cromwell Advanced to provision and run any pipeline.


The following picture shows the architecture of Cromwell on AWS.


Summary
The GATK community is constantly striving to make Genomics research in the cloud simpler. Support for AWS Cloud, however, was missing so far and was a key ask from multiple online research communities. Relevance Lab, in partnership with AWS, has addressed this need with its Genomics Cloud solution to make scientific research frictionless.

To know more about how you can start your GATK pipelines with WDL and Cromwell on the AWS Cloud in just 30 minutes using our solution at https://research.rlcatalyst.com, feel free to write to marketing@relevancelab.com

References
Accelerating Analytics for the Future of Genomics
Cromwell on AWS
Leveraging AWS HPC for Accelerating Scientific Research on Cloud
Cromwell Documentation
Artificial Intelligence, Machine Learning and Genomics
Accelerating Genomics and High Performance Computing on AWS with Relevance Lab Research Gateway Solution




2022 Blog, Research Gateway, Blog, Featured

The worldwide pandemic has highlighted the need to advance human health faster and to accelerate the discovery of new drugs for precision medicine by leveraging Genomics. We are building a Genomics Cloud on AWS that leverages the convergence of big compute, large data sets, AI/ML analytics engines, and high-performance workflows to make drug discovery more efficient, combining cloud and open source with our products.

Relevance Lab (RL) has been collaborating with AWS Partnership teams over the last year to create Genomics Cloud. This is one of the dominant use cases for scientific research in the cloud, driven by healthcare and life sciences groups exploring ways to make Genomics analysis better, faster, and cheaper so that researchers can focus on science and not complex infrastructure.

RL offers a product, RLCatalyst Research Gateway, that facilitates scientific research with easier access to big compute infrastructure, large data sets, powerful analytics tools, a secure research environment, and the ability to drive self-service research with tight cost and budget controls.

This product implements the top use cases for Genomics on AWS Cloud and provides an out-of-the-box solution, significantly saving cost and effort for customers.


Key Building Blocks for Genomics Cloud Architecture
The Genomics Cloud solution provides the following key components to meet the needs of researchers, scientists, developers, and analysts, allowing them to efficiently run their experiments without deep expertise in the backend computing capabilities.

Genomics Pipeline Processing Engine
The research community processes very large data sets by leveraging HPC systems, with the orchestration layer managed by popular open-source tools like Nextflow and Cromwell.

Nextflow is a bioinformatics workflow manager that enables the development of portable and reproducible workflows. It supports deploying workflows on a variety of execution platforms, including local, HPC schedulers, AWS Batch, Google Cloud Life Sciences, and Kubernetes.

Cromwell is a workflow execution engine that simplifies the orchestration of computing tasks needed for Genomics analysis. Cromwell enables Genomics researchers, scientists, developers, and analysts to efficiently run their experiments without the need for deep expertise in the backend computing capabilities.

Many organizations also use commercial tools like Illumina DRAGEN and NVIDIA Parabricks, which are more optimized for reducing processing timelines but also come at a price.

Open Source Repositories for Common Genomics Workflows
The solution allows researchers to easily reuse existing workflows and containers built by different communities and tools. Researchers can leverage any of the existing pipelines and containers or create their own implementations based on existing standards.

GATK4 is a Genome Analysis Toolkit for Variant Discovery in High-Throughput Sequencing Data. Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools with a primary focus on variant discovery and genotyping. Its powerful processing engine and high-performance computing features make it capable of taking on projects of any size.

BioContainers – A community-driven project to create and manage bioinformatics software containers.

Dockstore – A free and open-source platform for sharing reusable and scalable analytical tools and workflows. It is developed by the Cancer Genome Collaboratory and used by the GA4GH.

nf-core Pipelines – A community effort to collect a curated set of analysis pipelines built using Nextflow.

Workflow Description Language (WDL) is a way to specify data processing workflows with a human-readable and -writeable syntax.
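
As a small illustration of how machine-readable WDL is, the open-source miniwdl Python library (an assumption here, not a component of this solution) can parse and validate a WDL document before it is handed to an engine like Cromwell:

    import WDL  # provided by the open-source miniwdl package (pip install miniwdl)

    # "workflow.wdl" is a hypothetical local file
    doc = WDL.load("workflow.wdl")   # parses and type-checks the document
    if doc.workflow is not None:
        print("workflow:", doc.workflow.name)
    for task in doc.tasks:           # tasks declared in the same file
        print("task:", task.name)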

AWS Batch for High Performance Computing
AWS has many services that can be used for Genomics. In this solution, the core of the architecture is AWS Batch, a managed service built on top of other AWS services such as Amazon EC2 and Amazon Elastic Container Service (ECS). Proper security is provided through roles via AWS Identity and Access Management (IAM), a service that helps you control who is authenticated (signed in) and authorized (has permissions) to use AWS resources.
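
As a minimal sketch of how a containerized Genomics task lands on this architecture, a job can be submitted to an AWS Batch queue with boto3; the job, queue, and job-definition names below are hypothetical:

    import boto3

    batch = boto3.client("batch")  # uses the caller's IAM role/credentials

    response = batch.submit_job(
        jobName="bwa-mem-sample-1",          # hypothetical job name
        jobQueue="genomics-spot-queue",      # hypothetical Batch job queue
        jobDefinition="bwa-mem:1",           # hypothetical job definition
        containerOverrides={
            "command": ["bwa", "mem", "ref.fa", "reads.fq"],  # illustrative command
            "resourceRequirements": [
                {"type": "VCPU", "value": "4"},
                {"type": "MEMORY", "value": "16384"},  # MiB
            ],
        },
    )
    print(response["jobId"])  # Batch schedules the container on EC2/Spot capacity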

Large Data Sets Storage and Access to Open Data Sets
AWS cloud is leveraged to deal with the needs of large data sets for storage, processing, and analytics using the following key products.

Amazon S3 for high-throughput data ingestion, cost-effective storage options, secure access, and efficient searching

AWS DataSync, a secure online service that automates and accelerates moving data between on-premises storage and AWS storage services

The AWS Open Data program, which houses openly available data sets, including 40+ open Life Sciences data repositories (see the sketch below)
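
For example, registry data sets such as the 1000 Genomes Project are published in public S3 buckets and can be browsed without AWS credentials; the bucket is real, but the prefix below is illustrative:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Anonymous (unsigned) requests work for public Open Data buckets
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    resp = s3.list_objects_v2(
        Bucket="1000genomes",   # public 1000 Genomes Project bucket
        Prefix="phase3/",       # illustrative prefix
        MaxKeys=10,
    )
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])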

Outputs Analysis and Monitoring Tools
One of the key building blocks for Genomic data analysis is access to common tools like the following, integrated into the solution.

MultiQC searches a given directory for analysis logs and compiles an HTML report. It’s a general-use tool, perfect for summarising the output from numerous bioinformatics tools.
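
MultiQC is a command-line tool, so a portal or script can compile a report over a pipeline's output directory. A minimal sketch, with placeholder paths:

    import subprocess

    # Compile an HTML report from all analysis logs found under results/
    subprocess.run(
        ["multiqc", "results/", "--outdir", "reports/"],  # placeholder paths
        check=True,
    )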

IGV (Integrative Genomics Viewer) is a high-performance, easy-to-use, interactive tool for the visual exploration of genomic data.

RStudio for Genomics, since R is one of the most widely used and powerful programming languages in bioinformatics. R especially shines where a variety of statistical tools are required (e.g., RNA-Seq, population Genomics) and in the generation of publication-quality graphs and figures.

Genomics Data Lake
AWS Data Lake is used for creating a Genomics data lake for tertiary processing. Once secondary analysis generates its outputs, typically in Variant Call Format (VCF), there is a need to move such data into a Genomics Data Lake for tertiary processing. Leveraging standard AWS tools and solution frameworks, a Genomics Data Lake is implemented and integrated with the end-to-end sequencing processing pipeline.

The Variant Call Format (VCF) specification is used in bioinformatics for storing gene sequence variations, typically in a compressed text file. According to the specification, a VCF file has meta-information lines, a header line, and data lines. Compressed VCF files are indexed for fast, random-access retrieval of variants from a range of positions.

VCF files, though popular in bioinformatics, are a mixed file type that includes a metadata header and a more structured, table-like body. Converting VCF files into the Parquet format works excellently in distributed contexts like a data lake, as the sketch below shows.
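
A minimal sketch of that conversion, assuming pandas and pyarrow are installed: skip the '##' meta-information lines, read the table-like body with its '#CHROM' header, and write Parquet. File names are placeholders.

    import gzip
    import pandas as pd  # assumes pandas and pyarrow are installed

    def vcf_to_parquet(vcf_gz_path, parquet_path):
        # Count the '##' meta-information lines to skip to the '#CHROM' header line
        with gzip.open(vcf_gz_path, "rt") as fh:
            meta_lines = 0
            for line in fh:
                if line.startswith("##"):
                    meta_lines += 1
                else:
                    break

        # The remaining body is tab-separated, with '#CHROM ... INFO' as the header
        df = pd.read_csv(vcf_gz_path, sep="\t", skiprows=meta_lines, dtype=str)
        df = df.rename(columns={"#CHROM": "CHROM"})
        df.to_parquet(parquet_path, index=False)  # columnar layout for the data lake

    vcf_to_parquet("sample.vcf.gz", "sample.parquet")  # placeholder file names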

Cost Analysis of Workflows
One of the biggest concerns for users of a Genomics Cloud is control over budget and cost. RLCatalyst Research Gateway addresses this by tracking spend across projects, researchers, and workflow runs at a granular level and by allowing spend optimization through techniques like Spot Instances and on-demand compute. Guardrails are built in for appropriate controls and corrective actions. Users can run sequencing workflows using their own AWS accounts, allowing for transparent control and visibility.
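
Because every resource a run provisions is tagged, per-project spend can also be pulled directly from AWS Cost Explorer. A minimal sketch; the tag key "Project" and the dates are assumptions for illustration:

    import boto3

    ce = boto3.client("ce")  # AWS Cost Explorer

    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2022-01-01", "End": "2022-02-01"},  # example month
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "Project"}],  # assumed cost-allocation tag
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])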

Summary
To make large-scale genomic processing in the cloud easier for institutions, principal investigators, and researchers, we provide the fundamental building blocks for Genomics Cloud. The integrated product covers large data sets access, support for popular pipeline engines, access to open source pipelines & containers, AWS HPC environments, analytics tools, and cost tracking that takes away the pains of managing infrastructure, data, security, and costs to enable researchers to focus on science.

To know more about how you can start your Genomic Cloud in the AWS cloud in 30 minutes using our solution at https://research.rlcatalyst.com, feel free to contact marketing@relevancelab.com.

References
High-performance genetic datastore on AWS S3 using Parquet and Arrow
Parallelizing Genome Variant Analysis
Pipelining GATK with WDL and Cromwell
Accelerating Genomics and High Performance Computing on AWS with Relevance Lab Research Gateway Solution



