2024 Blog, Blog, Featured

With growing complexity across Cloud Infrastructure, Application deployments, and Data pipelines, there is a critical need for effective Site Reliability Engineering (SRE) solutions in large enterprises. While this is a common need, many companies attempt to solve the problem with multiple siloed efforts. Following are some common problems we have observed:

  1. Data center-oriented approach to SRE adoption, with a focus on separate layers across Storage, Compute, Network, Apps, and Databases. This approach creates silos, with different teams working across the tracks.
  2. Significant focus on reactive models for Observability, with multiple tools and overlapping monitoring coverage across on-prem and cloud systems. This creates alert fatigue, false alarms, long diagnostic times, and slow recovery cycles.
  3. Long planning and analysis cycles on how to get started on an “SRE Transformation Program”, with multiple groups, approaches, and discovery cycles.
  4. Need for organizational clarity on who should drive the SRE program across Infra, Apps, DevOps, Data, and Service Delivery groups.
  5. Undefined roadmap for maturity and for how to leverage Cloud, Automation, and AIOps to roll out SRE programs at scale across the enterprise.
  6. Federated models of IT and Business Units with shared responsibility across global operations, and the need to balance standardization against self-service flexibility.
  7. Missing information on the problems and faults affecting end users, such as slow response times, surprise outages, and unpredictable performance, and no view of real-time Business Performance Metrics (SLAs).
  8. Lack of mature Critical Incident Management and Incident Intelligence.
  9. Custom approaches to solutions, lacking the ability to build a common framework and scale across different units.
  10. Need for Machine Learning Observability, including data collection and alerting, and monitoring of data growth, data drift, and consumption.
  11. Difficulty tracking Platform Cost visibility across businesses, regions, and projects.

With growing cloud adoption, these issues have been amplified, along with concerns about cost and security; in the absence of mature SRE models, they slow down digital transformation efforts.

To fix these issues in a prescriptive manner, Relevance Lab has worked with some large customers to evolve a “Platform Centric” model for SRE adoption. It leverages common tools and open-source technologies that speed up SRE implementation, saving significant time, cost, and effort. With a rapid deployment model, the rollout can be done across a global enterprise using automation-driven templates.

The figure below explains the Command Centre SRE Platform from Relevance Lab.



Building Blocks of SRE Platform

  1. Application Centric Design
    • The first step towards building a mature SRE implementation is an application-centric view aligned to business services. Using platforms like ServiceNow, we can build relationships or service maps between infrastructure and application services. This is crucial during an outage, as it helps identify the root cause.
    • Once all assets are identified, they are segregated by application type, tagged to business services, and managed centrally.

  2. Monitoring
    • The next step is to enable monitoring sensors for all business-critical systems. How the sensors are enabled varies with the type of resource, as described below (a minimal enablement sketch follows this list):
      • Systems Monitoring: This is typically infrastructure and network monitoring and can be enabled using native cloud services such as AWS CloudWatch and Azure Monitor, or third-party tools such as SolarWinds and Zabbix.
      • Applications or Logs Monitoring: Application monitoring involves both performance monitoring and log monitoring. This can be achieved using cloud-native tools such as AWS X-Ray and Azure Application Insights, or third-party tools such as AppDynamics, ELK, and Splunk.
      • Jobs Monitoring: For monitoring scheduled jobs, tools such as New Relic, Dynatrace, and Control-M are used.
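
As an illustration of how one such sensor could be enabled programmatically, here is a minimal sketch that creates a CPU utilization alarm in AWS CloudWatch using boto3. The instance ID, SNS topic, and thresholds are placeholder assumptions, not recommended values.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical identifiers - replace with real resources in your account.
INSTANCE_ID = "i-0123456789abcdef0"
ALARM_TOPIC = "arn:aws:sns:us-east-1:123456789012:sre-alerts"

# Create a simple CPU utilization alarm as one example of a "systems monitoring" sensor.
cloudwatch.put_metric_alarm(
    AlarmName="app-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,                 # evaluate in 5-minute windows
    EvaluationPeriods=3,        # sustained for 15 minutes before alarming
    Threshold=80.0,             # illustrative threshold, not a recommendation
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALARM_TOPIC], # notify the on-call channel via SNS
)
```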

  3. SRE Approach with Event Management
    • Once monitoring sensors are enabled, they generate a large volume of alerts, much of which is noise, including false alarms and duplicate alerts. Relevance Lab algorithms help with de-duplication, aggregation, and correlation of these alerts, thereby reducing alert fatigue (a minimal de-duplication sketch follows this list).
    • Golden Signals: The golden signals, namely latency, traffic, errors, and saturation, are defined, configured, and set up to detect abnormalities during this stage. By integrating them with the standard Incident Management and Problem Management processes and ITSM platforms, application stability and reliability mature over time.
    • Observability Dashboards: A single pane of glass across your environment gives you visibility into your business apps. The Relevance Lab SRE implementation includes the following dashboards as a standard out of the box:
      • Infrastructure Dashboard
      • Application Dashboard
      • Program Dashboard (Grafana)
      • Program Dashboard (ServiceNow)
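
For readers curious what alert de-duplication can look like at its simplest, below is a minimal sketch that fingerprints alerts by host, metric, and severity and suppresses repeats within a short window. The fields and window length are illustrative assumptions; real correlation engines are considerably richer.

```python
import hashlib
import time

DEDUP_WINDOW_SECONDS = 300  # suppress duplicates seen within 5 minutes (illustrative)
open_alerts = {}            # fingerprint -> {"first_seen": ..., "count": ...}

def fingerprint(alert: dict) -> str:
    """Build a stable key from the fields that identify 'the same' alert."""
    key = f"{alert['host']}|{alert['metric']}|{alert['severity']}"
    return hashlib.sha256(key.encode()).hexdigest()

def ingest(alert: dict) -> bool:
    """Return True if the alert is new (should page), False if it is a duplicate."""
    now = time.time()
    fp = fingerprint(alert)
    existing = open_alerts.get(fp)
    if existing and now - existing["first_seen"] < DEDUP_WINDOW_SECONDS:
        existing["count"] += 1   # aggregate instead of paging again
        return False
    open_alerts[fp] = {"first_seen": now, "count": 1}
    return True

# Example: the second alert is suppressed as a duplicate of the first.
a = {"host": "web-01", "metric": "CPUUtilization", "severity": "critical"}
print(ingest(a))  # True  -> page the on-call engineer
print(ingest(a))  # False -> counted against the already-open alert
```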

The figure below shows the SRE Dashboard in detail.



How can new customers benefit from our SRE Platform?
In today’s fast-paced and technology-driven world, organizations need robust and efficient IT operations to stay ahead of the competition. Relevance Lab’s SRE solution provides the necessary tools and frameworks to unlock operational excellence, ensuring high availability, scalability, and reliability of critical business systems. With our SRE solution, organizations can focus on innovation and growth, confident in the knowledge that their IT infrastructure is well-managed and optimized for exceptional performance.

Summary
Relevance Lab is a specialist in SRE implementation and helps organizations achieve reliability and stability with SRE execution. While enterprises can try to build some of these solutions themselves, it is a time-consuming, error-prone activity that benefits from a specialist partner. We realize that each large enterprise has a different context-culture-constraint model covering organization structures, team skills and maturity, technology, and processes. Hence, the right model for any organization has to be created collaboratively, with Relevance Lab acting as an advisor to Plan, Build, and Run the SRE model.

For more details, please feel free to reach out to marketing@relevancelab.com

References
Site Reliability Engineering Ensures Digital Transformation Promises are Delivered to End-Users
What is Site Reliability Engineering (SRE) – Google Definition?
Site reliability engineering documentation


2024 Blog, Blog, Featured

Our goal at Relevance Lab (RL) is to make scientific research in the cloud ridiculously simple for researchers and principal investigators. Cloud is driving major advancements in both the Healthcare and Higher Education sectors. Rapidly being adopted by organizations across these sectors, in both commercial and public segments, research on the cloud is improving day-to-day lives through drug discoveries, healthcare breakthroughs, innovation of sustainable solutions, development of smart and safe cities, and more.

Powering these innovations, the public cloud provides infrastructure with more accessible and useful research-specific products that speed time to insights. Customers get more secure and frictionless collaboration capabilities across large datasets. However, setting up and getting started with complex research workloads can be time-consuming. Researchers often look for simple and efficient ways to run their workloads.

RL addresses this issue with Research Gateway, a self-service cloud portal that allows customers to run secure and scalable research on public clouds without any heavy lifting for setup. In this blog, we will explore different use cases where Research Gateway simplifies workloads and accelerates outcomes. We will also elaborate on two specific use cases, from the healthcare and higher education sectors, for the adoption of the Research Gateway Software as a Service (SaaS) model.

Who Needs Scientific Research in the Cloud?
The entire scientific community is trying to speed up research for better human lives. While scientists want to focus on “science” and not “infrastructure”, it is not always easy to have a collaborative, secure, self-service, cost-effective, and on-demand research environment. While most customers have traditionally used on-premise infrastructure for research, there is always a key constraint on scaling up with limited resources. Following are some common challenges we have heard our customers say:

  • We have tremendous growth of data for research and are not able to manage with existing on-premise storage.
  • Our ability to start new research programs despite securing grants is severely limited by a lack of scale with existing setups.
  • We have tried the cloud, but especially with High Performance Computing (HPC) systems, we are not confident about total spend and the budget controls needed to adopt the cloud.
  • We have ordered additional servers, but for months, we have been waiting for the hardware to be delivered.
  • We can easily try new cloud accounts but bringing together Large Datasets, Big Compute, Analytics Tools, and Orchestration workflows is a complex effort.
  • We have built on-premise systems for research with Slurm, Singularity Containers, Cromwell/Nextflow, and custom pipelines, and do not have the bandwidth to migrate to the cloud with updated tools and architecture.
  • We want to provide researchers the ability to have their ephemeral research tools and environments with budget controls but do not know how to leverage the cloud.
  • We are scaling up online classrooms and training labs for a large set of students but do not know how to build secure and cost-effective self-service environments like on-premise training labs.
  • We require a data portal for sharing research data across multiple institutions with the right governance and controls on the cloud.
  • We need an ability to run Genomics Secondary Analysis for multiple domains like Bacterial research and Precision Medicines at scale with cost-effective per sample runs without worrying about tools, infrastructure, software, and ongoing support.

Keeping the above common needs in perspective, Research Gateway is solving the problems for the following key customer segments:

  • Education Universities
  • Healthcare Providers
    • Hospitals and Academic Medical Centers for Genomics Research
  • Drug Discovery Companies
  • Not-for-Profit Companies
    • Primarily across health, education, and policy research
  • Public Sector Companies
    • Looking into Food Safety, National Supercomputing centers, etc.

The primary solutions these customers seek from Research Gateway are listed below:

  1. Analytics Workbench with tools like RStudio and Sagemaker
  2. Bioinformatics Containers and Tools from the standard catalog and bring your own tools
  3. Genomics Secondary Analysis in Cloud with 1-Click models using open source orchestration engines like Nextflow, Cromwell and specialized tools like DRAGEN, Parabricks, and Sentieon
  4. Virtual Training Labs in Cloud
  5. High Performance Computing Infrastructure with specialized tools and large datasets
  6. Research and Collaboration Portal
  7. Training and Learning Quantum Computing

The figure below shows the customer segments and their top use cases.



How Is Research Gateway Powering Frictionless Outcomes?
Research Gateway allows researchers to conduct just-in-time research with 1-click access to research-specific products, provision pipelines in a few steps, and take control of the budget. This helps in the acceleration of discoveries and enables a modern study environment with projects and virtual classrooms.

Case Study 1: Accelerating Virtual Cloud Labs for the Bioinformatics Department of Singapore-based Higher Education University
During interaction with the university, the following needs were highlighted to the RL team by the university’s bioinformatics department:

Classroom Needs: Primary use case to enable Student Classrooms and Groups for learning Analytics, Genomics Workloads, and Docker-based tools

Research Needs: Used by a small group of researchers pursuing higher degrees in Bioinformatics space

Addressing the Virtual Classroom and Research Needs with Research Gateway
The SaaS model of Research Gateway is used with a hub-and-spoke architecture that allows customers to configure their own AWS accounts for projects to control users, products, and budgets seamlessly.

The primary solution includes:

  • Professors set up classrooms and assign students for projects based on semester needs
  • Usage of basic tools like RStudio, EC2 with Docker, MySQL, Sagemaker
  • A special ask, port forwarding to connect a local RStudio IDE to shared data on the cloud, was also successfully put to use
  • End-of-day automated reports to students and professors on servers still running, to support cost optimization
  • Ability to create multiple projects in a single AWS Account + Region for flexibility
  • Ability to assign and enforce student-level budget controls to avoid overspending

Case Study 2: Driving Genomics Processing for Cancer Research of an Australian Academic Medical Center
While the existing research infrastructure is an on-premise setup due to security and privacy needs, the team is facing serious challenges with growing data and the influx of new genomics samples to be processed at scale. A team of researchers is taking the lead in evaluating AWS Cloud to solve the issues related to scale and drive faster research in the cloud with built-in security and governance guardrails.

Addressing Genomic Research Cloud Needs with Research Gateway
RL addressed the genomics workload migration needs of the hospital with the Research Gateway SaaS model, using the hub-and-spoke architecture that allows the customer to have exclusive access to their data and research infrastructure by bringing their own AWS account. The software is deployed in the Sydney region, complying with in-country data norms as per governance standards. Users can easily configure AWS accounts for genomics workload projects. They also get 1-click access to genomic research-related products, along with seamless budget tracking and pausing.

The following primary solution patterns were delivered:

  • Migration of existing HPC system using Slurm Workload Manager and Singularity Containers
  • Using Cromwell for Large Scale Genomic Samples Processing
  • Using complex pipelines with a mix of custom and public WDL pipelines like RNA-Seq
  • Large Sample and Reference Datasets
  • AWS Batch HPC leveraged for cost-effective and scalable computing
  • Specific Data and Security needs met with country-level data safeguards & compliance
  • Large set of custom tools and packages

The workload currently operates in an on-premise HPC environment, using Slurm as the orchestrator and Singularity containers. The migration involves converting Singularity containers to Docker containers so that they can be used with AWS Batch. The pipelines are based on Cromwell, one of the leading workflow orchestrators, available from the Broad Institute. The following picture shows the existing on-premise system and contrasts it with the target cloud-based system.
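
To make the target state more concrete, the sketch below shows how a containerized analysis step could be submitted to AWS Batch with boto3. In practice, Cromwell's AWS Batch backend performs such submissions while running the WDL pipelines; this standalone snippet only illustrates the underlying mechanism, and the job queue, job definition, sample paths, and resource sizes are hypothetical.

```python
import boto3

batch = boto3.client("batch", region_name="ap-southeast-2")  # Sydney region, per the case study

# Hypothetical resources created ahead of time: a job queue and a job definition
# that points at the Docker image converted from the original Singularity container.
JOB_QUEUE = "genomics-spot-queue"
JOB_DEFINITION = "rnaseq-align-step:3"

response = batch.submit_job(
    jobName="rnaseq-sample-0001",
    jobQueue=JOB_QUEUE,
    jobDefinition=JOB_DEFINITION,
    containerOverrides={
        "command": ["bash", "run_alignment.sh", "s3://example-bucket/samples/sample-0001/"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "8"},
            {"type": "MEMORY", "value": "32768"},  # MiB, illustrative sizing
        ],
    },
)
print("Submitted AWS Batch job:", response["jobId"])
```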



Case Study 3: Secure Research Environments for US based Academic Medical Centre
Secure Research Environments (SRE) provide researchers with timely and secure access to sensitive research data, computation systems, and common analytics tools for speeding up Scientific Research in the cloud. Researchers are given access to approved data, enabling them to collaborate, analyze data, share results within proper controls and audit trails. Research Gateway provides this secure data platform with analytical and orchestration tools to support researchers in conducting their work. Their results can then be exported safely, with proper workflows for submission reviews and approvals.

Addressing Secure Research Needs for Sensitive Data with Ingress/Egress Controls
RL addressed the SRE needs of a US-based Academic Medical Centre with HIPAA-compliant research for its Health Sciences group. The solution has the following key building blocks:

  • Data Ingress/Egress
  • Researcher Workflows & Collaborations with costs controls
  • On-going Researcher Tools Updates
  • Software Patching & Security Upgrades
  • Healthcare (or other sensitive) Data Compliances
  • Security Monitoring, Audit Trail, Budget Controls, User Access & Management

The figure below shows implementation of SRE solution with Research Gateway.



Conclusion
Relevance Lab, in partnership with public cloud providers, is driving frictionless outcomes by enabling secure and scalable research with Research Gateway across various use cases. By simplifying the setup and running of research workloads, in just 30 minutes, with self-service access and cost control, the solution enables the creation of internal virtual labs, accelerates complex genomic workloads, and solves the needs of Secure Research Environments with ingress/egress controls.

To know more about virtual Cloud Analytics training labs and launching Genomics Research in less than 30 minutes explore the solution at https://research.rlcatalyst.com or feel free to write to marketing@relevancelab.com

References
Cloud Adoption for Scientific Research in a SAFE and Trusted Manner
Research Data Platform Enabling Scientific Research in Cloud with AWS Open-Source Solution
AWS Cloud Technology & Consulting Specialization for Products and Solutions
Health Informatics and Genomics on AWS with Research Gateway
UK Health Data Research Alliance – Aligning approach to Trusted Research Environments
Trusted (and Productive) Research Environments for Safe Research
Secure research environment for regulated data on Azure




2023 Blog, DevOps Blog, Blog, Featured

In today’s complex regulatory landscape, organizations across industries are required to comply with various regulations, including the Sarbanes-Oxley Act (SOX). SOX compliance ensures accountability and transparency in financial reporting, protecting investors and the integrity of the financial markets. However, manual compliance processes can be time-consuming, error-prone, and costly.

Relevance Lab’s RLCatalyst and RPA solutions provide a comprehensive suite of automation capabilities that can streamline and simplify the SOX compliance process. Organizations can achieve better quality, velocity, and ROI tracking while saving significant time and effort.

SOX Compliance Dependencies on User Onboarding & Offboarding
With many employees working from home or remote locations, there is an increased challenge in managing resources and time. In the context of user provisioning, this introduces risks such as unauthorized access to the system by individual users beyond their roles or responsibilities.

Most organizations follow a defined process for user provisioning, such as sending a user access request with relevant details, including:

  • Username
  • User Type
  • Application
  • Roles
  • Obtaining line manager approval
  • Application owner approval

Based on the policy requirements, IT finally grants access. Several organizations still follow this process manually, which creates a security risk.

In such a situation, automation plays an important role. Automation reduces manual work, labor cost, and reliance on individual resources, and improves time management. An automation process built with proper design, tools, and security reduces the risk of material misstatement, unauthorized access, and fraudulent activity. Usage of ServiceNow has also helped in tracking and archiving evidence (an evidence repository) essential for compliance. Effective compliance results in better business performance.

RPA Solutions for SOX Compliance
Robotic process automation (RPA) is quickly becoming a requirement in every industry looking to eliminate repetitive, manual work through automation and behavior mimicry. This reduces a company’s use of resources, saves money and time, and improves the accuracy and standard of the work being done. Many businesses are not yet taking advantage of the potential of deploying RPA in the IT compliance process, due to barriers such as lack of knowledge, the absence of a standardized methodology, or a preference for carrying out these operations in a conventional manner.

Below are the areas to focus on:

  • Standardization of Process: Even though every organization is diverse and uses different technology and processes, there are opportunities to standardize SOX compliance techniques, frameworks, controls, and processes. Around 30% of the environment in a typical organization may be deemed high-risk, whereas the remaining 70% is medium to low risk. To improve the efficiency of the compliance process, a large portion of the paperwork, testing, and reporting related to that 70% can be standardized. This would make it possible to concentrate more resources on high-risk areas.
  • Automation & Analytics: Opportunities to add robotic process automation (RPA), continuous control monitoring, analytics, and other technology grow as compliance processes become more mainstream. These prospective SOX automation technologies not only have the potential to increase productivity and save costs, but they also offer a new viewpoint on the compliance process by allowing businesses to gain insights from the data.


How Can Automation Reduce Compliance Costs?


  • Shortening the duration and effort needed to complete SOX compliance requirements: Many of the time-consuming and repetitive SOX compliance procedures, including data collection, reconciliation, and reporting, can be automated. This can free up your team to focus on more strategic and value-added activities.
  • Enhancing the precision and completeness of data related to SOX compliance: Automation can aid in enhancing the precision and thoroughness of SOX compliance data by lowering the possibility of human error. Automation can also aid in ensuring that information regarding SOX compliance is gathered and examined in a timely and consistent manner.
  • Recognizing and addressing SOX compliance concerns faster: By giving you access to real-time information about your organization’s controls and procedures, automation can help you detect and address SOX compliance concerns more rapidly. By doing this, you can prevent expensive and disruptive compliance failures.

Automating SOX Compliance using RLCatalyst:
Relevance Lab’s RLCatalyst platform provides a comprehensive suite of automation capabilities that can streamline and simplify the SOX compliance process. By leveraging RLCatalyst, organizations can achieve better quality, velocity, and ROI tracking, while saving significant time and effort.



  • Continuous Monitoring: RLCatalyst enables continuous monitoring of controls, ensuring that any deviations or non-compliance issues are identified in real-time. This proactive approach helps organizations stay ahead of potential compliance risks and take immediate corrective actions.
  • Documentation and Evidence Management: RLCatalyst’s automation capabilities facilitate the seamless documentation and management of evidence required for SOX compliance. This includes capturing screenshots, logs, and other relevant data, ensuring a clear audit trail for compliance purposes.
  • Workflow Automation: RLCatalyst’s workflow automation capabilities enable organizations to automate and streamline the entire compliance process, from control testing to remediation. This eliminates manual errors and ensures consistent adherence to compliance requirements.
  • Reporting and Analytics: RLCatalyst provides powerful reporting and analytics features that enable organizations to gain valuable insights into their compliance status. Customizable dashboards, real-time analytics, and automated reporting help stakeholders make data-driven decisions and meet compliance obligations more effectively.

Example – User Access Management


Each entry below lists a risk, the associated control, the manual approach, and the automated approach.

  • Risk: Unauthorized users are granted access to applicable logical access layers. Key financial data/programs are intentionally or unintentionally modified.
    • Control: New and modified user access to the software is approved by an authorized approver as per the company IT policy. All access is appropriately provisioned.
    • Manual: Access to the system is provided manually by the IT team based on the approval given as per the IT policy and the roles and responsibilities requested. The SOD (Segregation of Duties) check is performed manually by the Process Owner/Application Owner as per the IT policy.
    • Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. The BOT checks for SOD role conflicts and provides the information to the Process Owner/Application Owner as per the policy. Once the approver rejects the approval request, the BOT grants the user no access to the system, and audit logs are maintained for compliance purposes.

  • Risk: Unauthorized users are granted privileged rights. Key financial data/programs are intentionally or unintentionally modified.
    • Control: Privileged access, including administrator accounts and superuser accounts, is appropriately restricted from accessing the software.
    • Manual: Access to the system is provided manually by the IT team based on the given approval as per the IT policy. A manual validation check and approval for restricted access to the system is provided by the Process Owner/Application Owner as per the company IT policy.
    • Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. Once the approver rejects the approval request, the BOT grants the user no access to the system, and audit logs are maintained for compliance purposes. The BOT can also limit the count and duration of access to the system based on the configuration.

  • Risk: Unauthorized users are granted access to applicable logical access layers. Key financial data/programs are intentionally or unintentionally modified.
    • Control: Access requests to the application are properly reviewed and authorized by management.
    • Manual: User access reports need to be extracted manually for access review, using tools or with the help of IT. Review comments need to be provided to IT for de-provisioning of access.
    • Automation: The BOT can help the reviewer extract the system-generated report on users. The BOT can compare the active user listing with the HR termination listing to identify terminated users. The BOT can be configured to de-provision access for users identified in the review report as having unauthorized access.

  • Risk: Unauthorized users are granted access to applicable logical access layers if access is not removed in a timely manner.
    • Control: Terminated application users’ access rights are removed on a timely basis.
    • Manual: System access is deactivated manually by the IT team based on the approval provided as per the IT policy.
    • Automation: System access can be deactivated by an auto-provisioning script designed as per the company IT policy. The BOT can be configured to check the user’s termination date and deactivate system access if SSO is enabled. The BOT can also be configured to deactivate user access to the system based on approval.

The comparison above details the manual and automated approaches. Automation can bring 40-50% gains in cost, reliability, and efficiency.
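
As a minimal illustration of the kind of check a BOT performs for the access reviews described above, the sketch below compares an application's active user list with an HR termination list and flags accounts that should be de-provisioned. The CSV file and column names are assumptions for the example; a production BOT would pull this data from the application and HR systems via their APIs and raise tickets or revoke access automatically.

```python
import csv
from datetime import date

# Hypothetical extracts; a real BOT would pull these from the application and HR system APIs.
ACTIVE_USERS_CSV = "active_app_users.csv"      # columns: user_id, role
HR_TERMINATIONS_CSV = "hr_terminations.csv"    # columns: user_id, termination_date

def load_column(path: str, column: str) -> dict:
    """Read a CSV extract and map user_id to the requested column."""
    with open(path, newline="") as f:
        return {row["user_id"]: row[column] for row in csv.DictReader(f)}

active_users = load_column(ACTIVE_USERS_CSV, "role")
terminated = load_column(HR_TERMINATIONS_CSV, "termination_date")

# Users who still hold access after their HR termination date are compliance findings.
today = date.today().isoformat()
findings = [
    {"user_id": uid, "role": role, "terminated_on": terminated[uid]}
    for uid, role in active_users.items()
    if uid in terminated and terminated[uid] <= today
]

for finding in findings:
    # A real implementation would raise a ServiceNow ticket and/or trigger de-provisioning here,
    # keeping an audit log entry as SOX evidence.
    print(f"De-provision {finding['user_id']} ({finding['role']}), "
          f"terminated on {finding['terminated_on']}")
```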

Conclusion
SOX compliance is a critical aspect of ensuring the integrity and transparency of financial reporting. By leveraging automation using RLCatalyst and RPA solutions from Relevance Lab, organizations can streamline their SOX compliance processes, reduce manual effort, and mitigate compliance risks. The combination of RLCatalyst’s automation capabilities and RPA solutions provides a comprehensive approach to achieving SOX compliance more efficiently and cost-effectively. The blog was enhanced using our own GenAI Bot to assist in creation.

For more details or enquires, please write to marketing@relevancelab.com

References
What is Compliance as Code?
What is SOX Compliance? 2023 Requirements, Controls and More
Building Bot Boundaries: RPA Controls in SOX Systems
Get Started with Building Your Automation Factory for Cloud
Compliance Requirements for Enterprise Automation (uipath.com)
Automating Compliance Audits|Automation Anywhere




2023 Blog, Blog, Featured

With the rise of Artificial Intelligence (AI), many enterprises and existing customers are looking into ways to leverage this technology for their own development purposes and use cases. The field is rapidly attracting investment, and adoption is proceeding iteratively, starting with simple use cases and moving on to more complex business problems. In working with early customers, we have found the following themes to be the first use cases for GenAI adoption in an enterprise context:

  • Interactive Chatbots for simple Questions & Answers 
  • Enhanced Search with Natural Language Processing (NLP) using Document Repositories with data controls
  • Summarization of Enterprise Documents and Expert Advisor Tools

While OpenAI provides models to build solutions with, a number of early adopters prefer the Microsoft Azure OpenAI Service for its better enterprise features.

Microsoft’s Azure OpenAI Service provides REST API access to OpenAI’s powerful language models, including the GPT-3.5 and Embeddings model series. The REST API is one way to connect, but Azure also provides .NET, Java, JavaScript, and Azure CLI options for communication.

Introduction to GenAI

  1. What is Generative AI?
    • A class of artificial intelligence systems that can create new content, such as images, text, or videos, resembling human-generated data by learning patterns from existing data.
  2. What is the purpose?
    • To create new content or generate responses that are not based on predefined templates or fixed responses.
  3. How does it work?
    • Data is collected through various methods, such as scraping and/or reading documents, directories, or indexes; the data is then preprocessed to clean and format it for analysis. AI models, such as machine learning and deep learning algorithms, are trained on this preprocessed data to make predictions or classifications, learning patterns from existing data and using that knowledge to produce new, original content.
  4. How can it be used by enterprises?
    • To assist end users (internal or external) in the form of next generation Chatbots.
    • To assist stakeholders with automating certain internal content creation processes.

Early Customer Adoption Experience
Customers wanted to experience GenAI for building awareness, validation of early use cases, and “testing the waters” with enterprise-grade security and governance for GenAI technology.

Early Use Cases Identified for Development
The primary focus area was in the content management space for enterprise data with focus on the following:

  1. End User Assistance (Chatbot)
    • Product Website Chatbot
    • Intranet Chatbot
  2. Content Creation
    • Document Summarization
    • Template based Document Generation
  3. SharePoint
    • Optical Character Recognition (OCR)
    • Cognitive Search
  4. Decision-making & Insights

Key Considerations for GenAI Leverage

  1. Limitations on current Chatbots
    • OCR
    • Closed chatbot allowing selection of pre-populated options
    • Limited scope and intelligence of responses
  2. Benefits expected from GenAI enhanced Chatbots
    • OCR
    • Human like responses
    • Ability to adapt quickly to new information
    • Multi-lingual
    • Restricts available data that Chatbot can draw from to verified Enterprise sites
  3. Potential Concerns
    • Can contain biases unintentionally learned by the model
    • Potential for errors and hallucinations

System Architecture
The system architecture using Azure Open AI takes advantage of several services provided by Azure.



The architecture may include the following components:

Azure OpenAI Service
Azure OpenAI Service provides access to OpenAI’s powerful language and embedding models hosted within Azure, with the security, compliance, and regional availability of the Azure platform. Developers can integrate these models into their applications through REST APIs and SDKs, enabling them to build intelligent and transformative solutions.

Azure Cognitive Services
Azure Cognitive Services offers a range of AI capabilities that can enhance Chatbot interactions. Services like Speech Services, Search Service, Vision Services and Knowledge Mining can be integrated to enable natural language understanding, speech recognition, and knowledge extraction.

Azure Storage
Azure Storage is a highly scalable and secure cloud storage solution offered by Microsoft Azure. It provides durable and highly available storage for various types of data, including files, blobs, queues, and tables. Azure Storage offers flexible options for storing and retrieving data, with built-in redundancy and encryption features to ensure data protection. It is a fundamental building block for storing and managing data in cloud-based applications.

Form Recognizer
Form Recognizer is a service provided by Azure Cognitive Services that uses machine learning to automatically extract information from structured and unstructured forms and documents. By analyzing documents such as invoices, receipts, or contracts, Form Recognizer can identify key fields and extract relevant data. This makes it easier to process and analyze large volumes of documents. It simplifies data entry and enables organizations to automate document processing workflows.
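
As a rough, illustrative sketch (not part of the reference architecture itself), the snippet below analyzes a document with a prebuilt model using the azure-ai-formrecognizer Python SDK (3.x). The endpoint, key, file name, and the choice of the prebuilt invoice model are assumptions for the example.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Hypothetical endpoint and key for a Form Recognizer (Document Intelligence) resource.
ENDPOINT = "https://<your-form-recognizer>.cognitiveservices.azure.com/"
KEY = "<your-key>"

client = DocumentAnalysisClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY))

# Analyze a sample invoice with a prebuilt model and print a few extracted fields.
with open("sample_invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)
result = poller.result()

for doc in result.documents:
    vendor = doc.fields.get("VendorName")
    total = doc.fields.get("InvoiceTotal")
    print("Vendor:", vendor.value if vendor else None)
    print("Total:", total.value if total else None)
```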

Service Account
A new service account is required for the team to establish connections with Azure services programmatically. The service account will need elevated privileges, as required for the APIs to communicate with Azure services.

Azure API Management
Azure API Management provides a robust solution to address hurdles like throttling and monitoring. It facilitates the secure exposure of Azure OpenAI endpoints, keeping them protected, responsive, and observable. Furthermore, it offers comprehensive support for the discovery, integration, and use of these APIs by both internal and external users.

Typical Interaction Steps between Components
The diagram below shows the typical interaction steps between different components.



  1. The Microsoft Cognitive Search Engine indexes content from the Document Repository as an async process.
  2. Using the Frontend Application, the user interacts with the Chatbot and sends a query.
  3. The Azure API forwards the query to the GPT Text Model, which transforms the user query into an optimized Search Input.
  4. The GPT Text Model returns this optimized Search Input to the Azure API Orchestration Layer.
  5. The API Layer sends the Search Query to Cognitive Search.
  6. Cognitive Search returns the Relevant Content.
  7. The API Layer sends the result from Cognitive Search, along with other details like the Prompt, Chat context, and history, to GenAI for Response Generation.
  8. Generated and Summarized content is returned from GenAI.
  9. The meaningful results are shared back with the user.

The above interactions demonstrate that, in this architecture, the documents remain inside the secure Azure network and are managed by the search engine. This ensures that raw content is not shared with the OpenAI layer, providing controlled governance for data security and privacy.
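
A highly simplified sketch of steps 3-8 is shown below: retrieve relevant passages from Azure Cognitive Search, then pass them as grounding context to an Azure OpenAI chat deployment. The endpoints, index name, deployment name, and API versions are assumptions for illustration; error handling, chat history, and prompt engineering are omitted.

```python
import requests

# Hypothetical resource names and versions - replace with your own.
SEARCH_ENDPOINT = "https://<your-search>.search.windows.net"
SEARCH_INDEX = "enterprise-docs"
SEARCH_KEY = "<search-query-key>"
OPENAI_ENDPOINT = "https://<your-openai>.openai.azure.com"
OPENAI_DEPLOYMENT = "gpt-35-turbo"
OPENAI_KEY = "<openai-key>"

def search_documents(query: str, top: int = 3) -> list[str]:
    """Steps 5-6: ask Cognitive Search for the most relevant passages."""
    url = f"{SEARCH_ENDPOINT}/indexes/{SEARCH_INDEX}/docs/search?api-version=2023-11-01"
    resp = requests.post(url, json={"search": query, "top": top},
                         headers={"api-key": SEARCH_KEY})
    resp.raise_for_status()
    # Assumes the index has a text field named "content".
    return [doc.get("content", "") for doc in resp.json().get("value", [])]

def answer(question: str) -> str:
    """Steps 7-8: send retrieved context plus the question to the GPT deployment."""
    context = "\n\n".join(search_documents(question))
    url = (f"{OPENAI_ENDPOINT}/openai/deployments/{OPENAI_DEPLOYMENT}"
           f"/chat/completions?api-version=2024-02-01")
    body = {
        "messages": [
            {"role": "system", "content": "Answer using only the provided context:\n" + context},
            {"role": "user", "content": question},
        ],
    }
    resp = requests.post(url, json=body, headers={"api-key": OPENAI_KEY})
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(answer("What is our leave policy for new employees?"))
```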

Summary
Relevance Lab is working with early customers for GenAI Adoption using our AI Compass Framework. The customers’ needs vary from initial concept understanding to deploying with enterprise-grade guardrails and data privacy controls. Relevance Lab has already worked on 20+ GenAI BOTs across different architectures leveraging different LLM Models and Cloud providers with a reusable AI Compass Orchestration solution.

To know more about how we can help you adopt GenAI solutions “The Right Way” feel free to write to us at marketing@relevancelab.com and for a demonstration of the solution at AICompass@relevancelab.com

References
Revolutionize your Enterprise Data with ChatGPT
Augmenting Large Language Models with Verified Information Sources: Leveraging AWS
AWS SageMaker and OpenSearch for Knowledge-Driven Question Answering
What’s Azure Cognitive Search?




2023 Blog, AIOps Blog, Blog, Featured

As part of the growing interest and attention on GenAI market trends, the priorities for enterprises in 2023 have rapidly shifted from tracking the trends to facing tremendous pressure to adopt this disruptive technology. While interest is very high, most enterprises are grappling with the challenge of where to start and which approach to use. Investments from CIO budgets are being quickly carved out, but the basic dilemma remains around early use cases, security and privacy issues with enterprise data, and which platforms and tools to leverage. Relevance Lab has launched an “AI Taskforce” that covers key internal participants and customer advisory teams for this innovation. The primary focus is to define core and priority themes relevant for the business and customers based on the current assessment. This is an emerging space with a lot of global investment, and the innovation is expected to drive major disruption in the next decade. We believe this requires an iterative model for strategy and an agile approach, with focused concept incubations, working in close collaboration with our customers.

Customer Needs for AI Adoption
The most common ask from customers is to use GenAI for their business, with the following primary objectives:

  • Enhancing their end customer experience and business outcomes.
  • Saving costs with better efficiency leveraging the new AI models & interaction channels.
  • Improving their core Products & Offerings with AI to ensure the business does not get disrupted or become irrelevant against the competition.

The figure below captures the summary of customer asks, common business problems, and categories of solutions being explored.



Translating the above objectives into meaningful and actionable pursuits requires focusing on key friction points and leveraging the power of AI. Some common use cases we have encountered are the following:


  • Increasing online-user purchases and conversions by 20% with personalized customer experiences.
  • Better revenue realization with Dynamic Pricing and Propensity Analysis.
  • Lower subscription renewal loss through early detection and engagement, with 90%+ predictability.
  • Wastage reduction (by US$10M annually) for a global pharma with AI-led Optimization Algorithms.
  • Better price realization for procurement (15%+) with Anomaly Detection in Plan Purchase Analytics.
  • Better information aggregation and curation for mortgages with Machine Learning (ML) classifications.

Following are early initiatives being taken for our customers leveraging GenAI:

  • Pharma Product Reviews Summary and Advisor with GenAI.
  • Deployment of Private Foundation Models and training with custom data & business rules for Advisory services in Financial Services.
  • Use of Chatbots for easier user and customer support for Media customers.
  • Access to Business Dashboards with Generative Models using prompts for E-Commerce customers in Retail.
  • Increasing productivity of Development and Testing efforts with GenAI specialized tools for Technology ISVs.

There is no doubt that the momentum of such early technology adoption is growing every day. This needs a structured program of collaboration with our customers to look for common building blocks and rapid model creation, training, deployment, interaction, and fine-tuning.

Relevance Lab AI Compass Framework
We have launched the “Relevance Lab AI Compass Framework” to guide and collaborate with customers in defining the early areas of focus in building solutions leveraging AI. The goal is to have this as a prescriptive model helping jumpstart the adoption of Enterprise and GenAI “The Right Way”. The figure below explains the same.

The AI Compass Framework takes a 360-degree perspective on the assessment of AI needs for an enterprise across the following pillars.

  • Product Engineering – building products that embed the power of AI
  • Business Data Decisions enhanced with AI
  • Machine Data Analysis enhanced with AI
  • Using GenAI for Business
  • Platform AI Competencies – choosing the right foundation
  • Cloud AI Services – leveraging the best of breed
  • Digital Content with GenAI
  • Robotic Process Automation enhanced with AI and Intelligent Document Generation
  • Preparing Enterprise Workforce – Training with AI
  • Managed Services & Support made more efficient & cost effective
  • Improving internal Tester and Quality Productivity with AI Tools
  • Developer Productivity enhancements with AI Tools


Relevance Lab is getting deeper into the above pillars and building the right design patterns to guide our customers on “The Right Way” for enterprise adoption. The plan is also to build a foundation AI applications platform that will speed up adoption for end customers, saving them time and effort while delivering quality outcomes.

Product Engineering with AI 
This pillar focuses on making AI architecture and design patterns part of better product design. The charter is to find and recommend new architectures, and integration with new GenAI models, for making existing software products smarter with embedded AI techniques. We expect new products to adopt an “AI-First” approach to development. Every product in its focus vertical (Healthcare & Life Sciences, BFSI, Media & Communication, Technology) will need to embed AI into its core architecture.

Business Data Decisions with AI
This pillar defines AI-enhanced Data Engineering for common use cases and building blocks. The traditional focus of AI initiatives has primarily been on giving agile and actionable insights into the following:

  • What happened in my business? – this is Informative.
  • What will happen? – this is Predictive.
  • What should be done? – this is Prescriptive.

The new dimension GenAI has added to the above is “Generative” capabilities. Along with the need to build new features, there is growing adoption of popular data platforms like Databricks, Snowflake, Azure Data Factory, AWS Data Lake, etc., which need to integrate with product-specific AI enhancements.

Machine Data Analysis with AI
Customers already focus on DevOps and AIOps, with large volumes of data generated from servers, applications, networks, security, and storage using different monitoring tools. However, there is a deluge of information and a need to reduce noise and improve response times for effective operations support. This needs alert intelligence to reduce alert fatigue, and incident intelligence to observe data across layers for faster issue diagnosis and fixes. Anomaly detection is a key need with time-series data, to look for odd patterns and flag risks such as security issues and vulnerabilities (a minimal sketch follows below). While AIOps brings together the need for AI across Observability, Automation, and Service Delivery, there are also ways to leverage new GenAI tools for better Chatbot support, reducing operational costs and increasing efficiency. A common ask from customers is the ability to predict a failure and prevent an outage in real time with AI using these models. This requires Site Reliability Engineering (SRE) solutions designed to be more effective with AI techniques.

Like intelligent observability for infrastructure and applications with AI/ML models, there is a growing need for data pipeline observability with specialized models. With the growing scale of ML models, there is a need to track drift across design, model, and data for such pipelines, with dashboards for visualization and actionable analytics.
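
As a minimal sketch of the anomaly detection idea mentioned above, the snippet below flags points in a time series whose rolling z-score exceeds a threshold. The window size and threshold are illustrative assumptions; production AIOps models account for seasonality, multivariate signals, and learned baselines.

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=30, threshold=3.0):
    """Return (index, value, z-score) for points deviating more than `threshold`
    standard deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline, skip to avoid division by zero
        z = (series[i] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((i, series[i], round(z, 2)))
    return anomalies

# Example: steady CPU readings with one sudden spike at the end.
cpu = [40 + (i % 5) for i in range(60)] + [95]
print(rolling_zscore_anomalies(cpu))  # flags the spike at index 60
```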

Using GenAI for Business
One of the most common asks is to leverage ChatGPT APIs and suggest ways to use this disruptive technology for existing customer and internal needs. The goal is to reduce internal costs and improve the external end-customer experience through quick projects that define common use cases and then go deeper with the customer’s own data and models.

We are working with early adopter customers on how to prepare and leverage GenAI for their business problems across different verticals. All large enterprises have carved out special initiatives on “How to Use GenAI” and we offer a unique program to incubate these projects.

Platform AI Competencies 
These platforms are leading innovation and solutions for companies building specialized applications that leverage AI, in areas such as open-source LLMs (Large Language Models), OpenAI APIs, reusable model libraries, TensorFlow, Hugging Face, the open-source LangChain library, Microsoft Orca, Databricks, etc. This pillar gets deep into specialized use cases for feature extraction, text classification, prompt engineering, Chatbots, Summarization, Generative Writing, Ideation, Reinforcement Learning, etc.

Cloud AI Services 
With significant existing investments of customers on public cloud providers like AWS, Azure, and GCP there is a growing need for leveraging specialized AI offerings from these providers to jumpstart adoption with security and scalability in enterprise context. Also, there is a growing momentum of new GenAI solutions from these providers like AWS CodeWhisperer, Amazon Bedrock, Azure Synapse, Microsoft Responsible AI, and specialized tools & training from Google Cloud. The growing adoption will require deep understanding and support for MLOps and LLMOps for efficient and cost-effective operations. 

Digital Content with GenAI 
One of the biggest impacts of GenAI is the evolution of smarter search and information access across customers’ existing repositories of documents, FAQs, content platforms, product brochures, etc. These cover all sorts of unstructured and semi-structured information. Customers are looking to leverage public and proprietary LLMs (Large Language Models) with their own data repositories and models fine-tuned on custom business rules. This requires customers to build, train, and deploy their own models with security controls and data privacy protected.

The right architectures will have a balance between different approaches of using standard models with enterprise data vs privately deployed models for enterprise content solutions.

Managed Services & Support AI 
Chatbots and GenAI can help improve the support lifecycle across Monitoring, ServiceDesk, TechOps, Desktop Support, and User Onboarding/Offboarding. They can help reduce costs and make daily tasks more efficient.

This aligns with customers’ focus on Managed Services, ServiceDesk, Command Centre, Technical Operations, and Security Ops. This pillar looks deeper into exploring AI techniques for Incident Intelligence, Chatbots, Automation, Self-Remediation, and Virtual Agents to be more productive and efficient. Relevance Lab has leveraged an “Automation-First” approach for greater productivity, effective operations, and compliance.

Robotic Process Automation with Intelligence
RPA (Robotic Process Automation) is bringing significant gains for business process automation in areas of repetitive & high frequency tasks along with better quality & compliances for use cases across different industries and corporate functions. With AI, a lot of additional benefits can be achieved for making business process frictionless. This pillar focuses on specialized use cases related to AI-Driven BOTs, Data & Documents Processing, Intelligent Decisioning by leveraging AI tools from key partners like UiPath & Automation Anywhere. 

Training with AI Technology
Companies are embarking on the goal of making all their employees AI-skilled and certified. Leveraging AI tools in everybody’s day-to-day charter will improve job efficiency. This requires setting up an AI Lab for internal training and certifications. To create such a strong foundation, this pillar is looking into creating an AI Academy and a program that drives “Self-Service Learning” and “Accreditation” based on a structured curriculum.

Developer & Testing Productivity with AI Tools 
Adoption of AI and GenAI tools is a key goal for smarter, faster, better outcomes. For testers, the specific areas of focus are Automated Test Case Generation, Integration Test Generation, Security Co-Pilot, Performance Assessment, and Simulated Data Generation. For developers, there are similar plans for boosting productivity using Developer Co-Pilot, Auto-Unit Tests, GenAI Code Assist, and Compliance AI.

Co-Development Opportunities with Customers 
As part of expediting innovation in this emerging area, we are launching a co-development program with early participants to build use cases specific to customer verticals and domain needs. We have dedicated, specialized teams working on deep GenAI and Enterprise AI skills and building reusable components. We are offering a special six-week program to incubate and jumpstart GenAI adoption by enterprises, building one specific use case.

To know more about how to collaborate and to share your ideas for early GenAI adoption, contact us at AICompass@relevancelab.com

References
AI Foundation Model: Generative AI on AWS
Azure OpenAI on your Data
Google Generative AI Service Offerings Designed to get you up and Running Fast
Revolutionize your Enterprise Data with ChatGPT
A CIO and CTO Technology Guide to Generative AI




2023 Blog, Cloud Blog, Blog, Featured

Currently, all large enterprises are dealing with multiple cloud providers, and the situation is further complicated when M&A leads to the integration of multiple organizations, and when vendors across Infrastructure, Digital, Enterprise Systems, and Collaboration tools bring their own cloud footprints bundled with their services. In this blog, we explain the common scenario faced by large companies and how to create “The Right Way” to adopt scalable Multi-Cloud Workload Planning and Governance Models.

Customer Needs
The customers facing such challenges usually share with us the following brief:

  • Assess existing workloads on AWS, Azure, GCP for basic health & maturity diagnostics.
  • Suggest a mature Cloud Management & Governance model for ensuring “The Right Way” to use the Cloud for multi-account, secure, and compliant best practices.
  • Recommend a model for future workloads migration and choice of cloud providers for optimal usage and ability to move new workloads to cloud easily.

Primary Business Drivers
Following are the key reasons for customers seeking Multi-Cloud Governance “The Right Way.”

  • Cost optimization and tracking for existing usage.
  • Ability to launch new regions/countries in cloud with easy and secure standardized processes.
  • Bring down cost of ownership on Cloud Assets – Infra/Apps/Managed Services with leverage of Automation and best practices.

Approach Taken
The basic approach followed for helping customers through the multi-cloud maturity models involves a PLAN-BUILD-RUN process as explained below:

Step-1: Planning & Assessment Phase
This involves working with customer teams to finalize the Architecture, Scope, Integration and Validation Needs for Cloud Assessment. The primary activities covered under this phase are following:

  • Coverage Analysis
    • Do a detailed analysis of all three Cloud Providers (AWS, Azure, GCP) and recommend what should be an ongoing strategy for Cloud Provider adoption.
  • Maturity Analysis
    • Do an assessment of current Cloud usage against industry best practices and share the maturity scorecard of customer setup.
  • Security Exposure
    • Find key gaps on security exposure and suggest ways for better governance.
  • Cost Assessment
    • Consolidation and cost optimization to have more efficient cloud adoption.

The foundation for analysis covers Cloud Provider specific analysis based on Well-Architected Frameworks as explained in the figure below:



Step-2: Build & Operationalize Phase
This primarily involves adoption of mature Cloud Governance360 and Well-Architected Models with best practices across key areas.

  • Accounts & Organization Units
  • Guardrails
  • Workloads Migration
  • Monitoring, Testing, Go-Live & Training
  • Documentation, Basic Automation for Infrastructure as Code
  • SAML Integration

The playbook for Build & Operationalize phase is based on Relevance Lab prescriptive model for using Cloud “The Right Way” as explained in the figure below.



Step-3: Ongoing Managed Services Run Phase
Post go-live, ongoing managed services ensure that the best practices created as part of the foundation are implemented and that an “Automation-First” approach is used for Infrastructure, Governance, Security, Cost Tracking, and Proactive Monitoring. Common activities under the Run phase cover regular tasks; a snapshot is provided below:

Daily Activities:

  • Monitoring & Reporting – App & Infra using CloudWatch: availability, CPU, memory, disk space, details of security-blocked requests; cost using Cost Explorer.
  • Alert acknowledgement and incident handling.
  • Publish a daily report (a minimal cost-reporting sketch follows these lists).

Weekly Activities:

  • Check Scan Reports for most recent critical vulnerabilities.
  • Monitor Security Hub for any new critical non-compliances.
  • Plan of action to address the same.

Monthly Activities:

  • Patch Management.
  • Budgets Vs Costs Report.
  • Clean-up of stale/inactive users/accounts.
  • Monthly Metrics.
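
To illustrate one of the daily activities above, the sketch below pulls the previous day's spend per service from AWS Cost Explorer with boto3, the kind of data that feeds a daily cost report. The date range, granularity, and grouping are illustrative choices, not the full ServiceOne reporting pipeline.

```python
from datetime import date, timedelta
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API

yesterday = date.today() - timedelta(days=1)
response = ce.get_cost_and_usage(
    TimePeriod={"Start": yesterday.isoformat(), "End": date.today().isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],  # break the spend down per service
)

print(f"AWS spend for {yesterday}:")
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"  {service}: ${amount:,.2f}")
```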

ServiceOne framework from Relevance Lab provides a mature Managed Services Model.

Sample Assessment Report
The analysis is done across 4 key areas as covered under Plan phase and explained below.

  • Cloud Provider Specific Analysis
    • Workload distribution analysis across all three providers, also mapped to 50+ different Best Practices Questionnaire.
  • 5-Pillars Well-Architected Analysis
    • Architecture & Performance Efficiency, Cost Optimization, Reliability & DR, Operational Excellence & Standardization, Security.
    • Global workloads analyzed across all different environments.
  • Security Findings
    • Identified Environments on Azure with significant exposure that needs fix.
    • Also suggested AWS Security Hub for formal scorecard and specific steps for maturity.
  • Cost Optimization
    • Analyzed costs across Environments, Workloads, and Apps.

Based on the above a final Assessment report is created with recommendations to fix immediate issues while also addressing medium term changes for ongoing maturity. The figure below shows a sample assessment report.



Summary
Relevance Lab is a specialist company in cloud adoption and workload planning. Working with 50+ customers on multiple engagements, we have created a mature framework for Multi-Cloud Workload and Governance Assessment. It is built on the foundation of best practices from the Cloud Adoption Framework (CAF) and Well-Architected Frameworks (WAF), enhanced with specific learnings and accelerators based on the Governance360 and ServiceOne offerings, to speed up the transition from unmanaged, ad-hoc models to “The Right Way” of a multi-cloud foundation.

To know more on how we can help feel free to contact us at marketing@relevancelab.com

References
AWS Well-Architected
Microsoft Azure Well-Architected Framework
Google Cloud Architecture Framework
AWS Cloud Adoption Framework (AWS CAF)
Microsoft Cloud Adoption Framework for Azure
Google Cloud Adoption Framework




2023 Blog, BOTs Blog, Blog, Featured

Relevance Lab is an Automation specialist company providing BOTs and Platforms for Business Processes, Applications, and Infrastructure. Our solutions leverage leading RPA (Robotic Process Automation) tools like UiPath, Automation Anywhere & Blue Prism. We provide re-usable templates for common use cases across Finance & Accounting, HR, IT, and Sales process automation.

By leveraging our robotic process automation services, our clients have realized:

  • 60-80% cost savings
  • 2-3x increase in process speed
  • 35-50% increase in employee productivity
  • Up to 30% FTE (Full-Time Equivalent) headcount reduction

The biggest challenge our customers face in adopting RPA is the “where to start” dilemma. To help identify what can be automated, we have designed the following guidelines for selecting initial use cases for implementation:

  • High frequency and volume workflows
  • High complexity processes
  • Error-prone areas where the quality of human tasks is a concern
  • Domains with compliance needs that benefit from automated outcomes

Using these broad guidelines across a set of corporate functions, we have commonly encountered the following use cases for RPA.

Finance & Accounting Automation

  • Stock Price Update
  • Purchase Order Process
  • Reconciliation Process
  • Payment Process
  • Financial and Loan Origination Process
  • Lease Accounting Process
  • Journal Process
  • Inventory Control Process
  • Error Audit Process
  • Invoice Process

Human Resources (HR) Automation

  • New Hire Onboarding Process
  • Data Approval Process
  • The Policy Processing (TPP)
  • Off-boarding Process
  • Legacy (AS/400) Process
  • Document Handling
  • Employee/HR/IT Process
  • User and Workspace – Employee/Contractor Offboarding
  • Back to Office (COVID) workflow automation and compliances

Infrastructure (IT) Management Automation

  • Distribution List Process
  • User Account Re-conciliation Process
  • Mailbox Automation & Reconciliation Process
  • User Migration & Access Control Verification Process
  • Logs Capture

Sales Automation

  • Contract Data Extraction
  • Sales Reporting
  • Sales Reconciliation Process
  • Material Edits Adjustments

With our comprehensive suite of RPA services, we have not only helped businesses adopt RPA but also maximize their investments in it.

The figure below explains the RPA Top Use Cases solved by Relevance Lab.



RL RPA Offerings
RPA Consulting/Assessment: RPA consulting and assessment is the process of evaluating an organization’s processes and identifying opportunities for automation. It is essential for ensuring that RPA implementation is successful.

RPA Implementation: RPA implementation is the process of deploying and using RPA bots to automate processes. It is essential for realizing the benefits of RPA.

Automation Design: Automation design is the process of designing and implementing automation solutions. It involves understanding the business needs, identifying the processes that are suitable for automation, and designing and implementing the automation solutions.

Automation Support: Automation support is the process of providing support to users of automation solutions. It involves providing help with troubleshooting problems, resolving issues, and providing training on how to use the automation solutions.

The figure below explains our core offerings.



Relevance Lab “Automation-First” RPA Platform Architecture
Applications under Robotic Process Execution
RPA is well suited for enterprise applications such as ERP solutions (for example, SAP or Siebel) and large-scale data or records processing applications such as mainframes. Most of these applications are data-centric and data-intensive, with significant setup and repetitive process activities.

RPA Tools

  • Ability to automate any type of application in any environment.
  • Develop software robots from recordings and configuration, enhanced with programming logic.
  • Build reusable components that can be applied across multiple robots, ensuring modularity, faster development, and easier maintenance.

RPA Platforms
Ability to develop meaningful analytics about robots and their execution statistics.

RPA BOT Workbench
RPA execution infrastructure is often a bank of parallel physical or virtual machines that can be controlled based on usage patterns. The number of machines can be scaled up or down to match the automation workload, and execution can run unattended since no further human interaction or intervention is required. A minimal scaling sketch is shown below.
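The sketch below illustrates one possible usage-based scaling approach, assuming the worker machines are EC2 instances tagged Role=rpa-worker and that pending_jobs comes from a hypothetical queue-depth helper; it is not the RLCatalyst implementation.

```python
# Usage-based scaling sketch for an RPA worker pool (assumptions: workers
# are EC2 instances tagged Role=rpa-worker; pending_jobs is supplied by a
# hypothetical queue-depth helper, not shown here).
import boto3

ec2 = boto3.client("ec2")

def worker_instances(state):
    """Return instance IDs of tagged RPA workers in the given state."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Role", "Values": ["rpa-worker"]},
            {"Name": "instance-state-name", "Values": [state]},
        ]
    )
    return [i["InstanceId"] for r in response["Reservations"] for i in r["Instances"]]

def scale_workers(pending_jobs, jobs_per_worker=5):
    """Start stopped workers when the queue grows; stop them all when it drains."""
    running = worker_instances("running")
    stopped = worker_instances("stopped")
    needed = -(-pending_jobs // jobs_per_worker)  # ceiling division

    if needed > len(running) and stopped:
        ec2.start_instances(InstanceIds=stopped[: needed - len(running)])
    elif pending_jobs == 0 and running:
        ec2.stop_instances(InstanceIds=running)
```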

The figure below explains the Relevance Lab “Automation-First” RPA Platform Architecture.



How to get started for new customers?

  • Reach out to Relevance Lab (write to marketing@relevancelab.com) for a quick discussion and demonstration of the standard solution
  • We will study the processes and help in identifying repetitive and manual tasks
  • Engage in creation of POC while selecting the right RPA Tool
  • Customers with standard needs can get started with a new setup in 4-6 weeks
  • Relevance Lab will also provide on-going support and managed services


Summary
Relevance Lab Automation at a Glance

  • RL has been an Automation Specialist since 2016 (7+ years).
  • Implemented 30+ successful customer automation projects covering the RPA lifecycle.
  • Globally, RL has 60+ RPA specialists with 150+ certifications.
  • Automated 100+ processes, including customized solutions for industries such as Healthcare, BFSI, Retail, Technology Services, and Manufacturing.

References
CoE Manager | Automation Anywhere
Build Your Robotic Process Automation Center of Excellence (uipath.com)




2023 Blog, command blog, Research Gateway, Blog, Featured

Secure Research Environments (SRE) provide researchers with timely and secure access to sensitive research data, computation systems, and common analytics tools for speeding up Scientific Research in the cloud. Researchers are given access to approved data, enabling them to collaborate, analyze data, share results within proper controls and audit trails. Research Gateway provides this secure data platform with the analytical and orchestration tools to support researchers in conducting their work. Their results can then be exported safely, with proper workflows for submission reviews and approvals.

Secure Research Environments build on the original concept of a Trusted Research Environment defined by the UK NHS and use the “Five Safes” framework for safe use of secure data. The five elements of the framework are:

  • Safe people
  • Safe projects
  • Safe settings
  • Safe data
  • Safe outputs

The solution has the following key building blocks:

  • Data Ingress/Egress
  • Researcher Workflows & Collaborations with costs controls
  • On-going Researcher Tools Updates
  • Software Patching & Security Upgrades
  • Healthcare (or other sensitive) Data Compliances
  • Security Monitoring, Audit Trail, Budget Controls, User Access & Management

The figure below shows implementation of SRE solution with Research Gateway.



The basic concept is to design a secure data enclave from which data cannot be transferred in or out without going through pre-defined workflows. Within the enclave itself, any amount or type of storage, compute, or tooling can be provisioned to fit the researcher’s needs. Researchers can use common research data and also bring in their own specific datasets.
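As an illustration of such enclave controls, the sketch below applies an S3 bucket policy that denies access to a study bucket from outside a specific VPC endpoint. The bucket name and endpoint ID are placeholders, and a production policy would also carve out exceptions for administrative or break-glass roles.

```python
# Enclave control sketch: deny S3 access outside a specific VPC endpoint
# (assumptions: bucket name and endpoint ID are placeholders; a production
# policy would also exempt administrative/break-glass roles).
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-secure-study-data"        # placeholder
vpc_endpoint_id = "vpce-0123456789abcdef0"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideEnclave",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpc_endpoint_id}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```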

The core functionality for SRE deals with solutions for the following:
Data Management and Preparation
This deals with “data ingress management” from both public and private sources for research. There are functionalities dealing with data ingestion, extraction, processing, cleansing, and data catalogs.

Study Preparation
Depending on the type of study and the participants from different institutions, the secure data enclave allows for study-specific data preparation, allocation, access management, and assignment to specific projects.

Secure Research Environment
A controlled cloud environment is provided for researchers to access study data securely, with no direct ingress or egress capability, and to conduct research using common tools like JupyterLab, RStudio, and VS Code for both interactive and batch processing. The shared study data is pre-mounted on research workspaces, making it easy for researchers to focus on analysis without getting into the complexity of infrastructure, tools, and costs.

Secure Egress Approvals for Results Sharing
After the research is complete, if researchers want to extract results from the secure research environment, a specialized workflow is provided for request, review, approval, and download of data with compliance and audit trails.

The SRE Architecture provides for Secure Ingress and Egress controls as explained in the figure below.



Building Block Detailed Steps
Data Management
  • Project Administrator creates the Data Library and research projects.
  • Project Administrator selects the Data Library project.
    • Sets up Study Bucket.
    • Creates the sub-folders to hold data.
    • Sets up an ingress bucket for each researcher to bring in their own data (a minimal provisioning sketch appears after these steps).
    • Shares this with the researcher.
  • Project Administrator selects the Study screen.
    • Creates an internal study for each dataset and assigns it to the corresponding Secure Research project.
    • Creates an internal study for each ingress bucket.
  • Project Administrator assigns the researchers to the corresponding secure projects.
Secure Research Environments
  • Researcher logs in.
  • Researcher uploads their own data to the ingress bucket.
  • Researcher creates a workspace (secure research desktop).
  • Researcher connects to workspace.
  • Researcher runs code and generates output.
  • Researcher copies output to egress store.
  • Researcher submits an egress request from the portal.
Egress Application
  • Information Governance lead logs in to Egress portal.
  • IG Lead approves request.
  • Project administrator logs in to portal.
  • Project administrator approves the request.
  • IG Lead logs in and downloads the file.
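The sketch below illustrates the per-researcher ingress bucket step referenced above, assuming boto3 and an administrative role; the naming convention, region, and settings are illustrative rather than the Research Gateway internals.

```python
# Per-researcher ingress bucket sketch (assumptions: the naming convention,
# region, and settings are illustrative, not the Research Gateway internals).
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

def create_ingress_bucket(researcher_id: str) -> str:
    bucket = f"study-ingress-{researcher_id}"  # hypothetical naming convention
    s3.create_bucket(Bucket=bucket)
    # Block all public access so data can only move through approved workflows.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    # Enforce encryption at rest for anything the researcher uploads.
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )
    return bucket

print(create_ingress_bucket("researcher-001"))
```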

The need for Secure Research Enclaves is growing across different countries. There is an emerging need for a consortium model, where multiple Data Producers and Consumers interact in a Secure Research Marketplace model. The marketplace model is implemented on AWS Cloud and provides tracking of costs and billing for all participants. The solution can be hosted by a third party and delivered in a Software as a Service (SaaS) model driving the key workflows for Data Producers and Data Consumers, as explained in the figure below.



Summary
Secure Research Environments are key features for enabling large institutions and governmental agencies to speed up research across different stakeholders leveraging the cloud. Relevance Lab provides a pre-built solution that can speed up the implementation of this large scale and complex deployment in a fast, secure, and cost-effective manner.

Here is a video demonstrating the solution.

To know more about this solution, feel free to write to marketing@relevancelab.com.

References
UK Health Data Research Alliance – Aligning approach to Trusted Research Environments
Trusted (and Productive) Research Environments for Safe Research
Deployment of Secure Data Environments on AWS
Microsoft Azure TRE Solution




2023 Blog, SWB Blog, Blog, Featured

Research computing is a growing need, and AWS cloud enables researchers to process big data with scalable computing in a secure and flexible manner. While cloud computing is a powerful platform, it also brings complexity, with new tools, nomenclature, and multiple options that distract researchers. Relevance Lab is partnering with the AWS Public Sector group and some leading US universities to create a frictionless “Research Data Platform (RDP)” leveraging open-source solutions.

Service Workbench from AWS is a powerful open-source solution for enabling research in the cloud. Customers around the globe are already using this solution for common use cases such as the following:

  • Enable researchers to use AWS Cloud with Self-service capabilities and common catalog of tools like EC2, SageMaker, S3, Studies data etc.
  • Use common Data Analysis tools like RStudio in a secure and scalable manner.
  • Set up a “Trusted Research Environment” in the cloud with additional controls that enforce ingress/egress data restrictions for compliance.

While Service Workbench provides a good foundation platform for research, it also has some challenges, based on feedback from early adopters, mainly related to the following:

  • Complex setup requiring deep cloud know-how.
  • An admin-centric user experience that is not very researcher friendly.
  • Scalability challenges while adopting large scale research setups.
  • Hard to customize.
  • No enterprise support models available to guide customers through a Plan-Build-Run lifecycle.

Relevance Lab has built a modern and researcher friendly User Experience solution called “Research Data Platform” in collaboration with AWS and its early adopters extending the open-source foundation.

Key Functionalities of Research Data Platform
The primary goal is to drive frictionless research in the cloud with the following key features:

  • Built as an open-source solution and made available to institutions interested in collaborating on a common Data Science Platform for research.
  • “Project Centric” model enabling collaboration of researchers with common data, tools, and research goals in a self-service manner.
  • Modern architecture with support for containers, enabling researchers to bring their own tools covering web-based software, desktop-based tools, and terminal-based solutions, seamlessly accessed from the Research Data Platform.
  • Enable researchers to launch applications and choose configurations without knowledge of Cloud Infrastructure details for both regular and GPU workloads.
  • Integration with project-centric research datasets and an easy browser-based interface to upload and download data for research.
  • Ability to run multiple research projects across different AWS accounts with secure and scalable setup and guardrails.

The key functional flows needed by a researcher are explained in the figure below:



Here is a link to a demo of the solution.

Solution Architecture of Research Data Platform
The solution’s building blocks leverage the Service Workbench functionality and create a separate Research Data Platform (RDP) layer that provides a UI-driven application to researcher and admin roles. The figure below captures the building blocks for this solution.



The solution consists of the following components:

  • Webserver that serves the UI for the platform. The UI provides the entire researcher user experience whereby users can log in with their credentials and access the projects made available to them. Within the projects, users can launch applications that have been configured for them by the administrator. Users can choose the required configuration of the instances based on configurations created by the administrator.
  • Research Data Platform DB. This database stores configuration information and the mapping information required to facilitate the use of the underlying “Service Workbench” open-source software.
  • Research Data Platform CLI. This command line interface allows the administrator to set up and configure projects, users, datasets, launchers and configurations easily.
  • Service Workbench. This open-source software from AWS is the underlying API-driven engine that orchestrates and manages all the AWS resources on behalf of the user.

Deployment Architecture of Research Data Platform
The solution is deployed in an enterprise model for each customer in their AWS accounts and recommends the following architecture based on the AWS Well-Architected Framework, as explained in the figure below.



The deployment of the Research Data Platform consists of the following:

  • One “Main” AWS account where RDP is deployed along with Service Workbench from AWS.
  • Within the main account, Service Workbench is deployed as a serverless solution driven by APIs. It stores data in a DynamoDB database and uses AWS Service Catalog to manage and orchestrate resources. It uses Amazon S3 to create buckets that hold data.
  • Within the main account, the Research Data Platform is deployed as a web server that serves the UI, along with an API backend that communicates with the Service Workbench.
  • One or more project accounts are onboarded and can be used to create projects and access datasets.

Sample Screens for Research Data Platform
The key functionality for the solution is explained in some sample screens below.

Home Page: This is the first page that the user visits. From this page the user can choose to log in to the Research Data Platform.



Projects Page: The projects page displays a card view of all the projects that the logged-in user is assigned to. Projects are set up by the administrator.



Each application that is useful to a researcher is set up as a launcher. Each launcher appears on the project workbench page as a card and the researcher can instantiate a session by clicking on the launcher card.



Files tab: This screen allows the researcher to browse the files in the datasets that are assigned to the project. A default storage area called project storage is available in every project. The project storage can also be browsed from this screen.



Launch Dialog: The user can select a configuration that is suitable for their research.



Project Details: The user can connect to Active sessions from the Workbench tab.



Sessions: An instance of a launcher is called a session. A user can connect to a session via the browser to access the application they need for conducting their research and analysis.


How Can New Customers Get Started?

  • Reach out to Relevance Lab (write to rlcatalyst@relevancelab.com) for a quick discussion and demonstration of the standard solution
  • We will capture an assessment of standard features vs. known gaps for adopting the solution
  • Engage on a Plan-Build-Run model based on deployment, enablement and operational readiness to start using Research in AWS cloud with simple and secure best practices
  • Customers with standard needs can get started with a new setup in 8-10 weeks
  • Relevance Lab will also provide on-going support and managed services

Conclusion
The Research Data Platform offers a comprehensive and researcher-friendly solution. It empowers researchers to process big data, perform data analysis, and conduct research efficiently in a secure and scalable manner. By bridging the gap between researchers and the AWS cloud, the RDP fosters innovation and advances scientific discovery in diverse domains.

References
Managing compute environments for researchers with Service Workbench on AWS
Using AWS Cloud for Research
Five ways to use AWS for research (starting right now)




2023 Blog, AWS Platform, Blog, Featured, Feature Blog

Relevance Lab (RL) has been an AWS (Amazon Web Services) partner for more than a decade now. While the journey started as a Services Partner, it has extended and matured into a niche technology partnership with multiple solutions offered on AWS Marketplace.

Here is a Quick Snapshot of AWS Capabilities:

  • RL is involved in Plan-Build-Run lifecycle of Cloud adoption by enterprises over a multi-year transformation journey.
  • The approach to Cloud Adoption is built on some key best practices covering Automation-First Approach, DevOps, Governance360, and Application-Centric Site Reliability Engineering (SRE) focus.
  • In Cloud Managed Services we cover all aspects of DevOps, AIOps, SecOps and ServiceDesk Ops leveraging our Automation Platforms – RLCatalyst BOTs, Command Centre, ServiceOne.
  • Involved with 50+ Cloud engagements covering large-scale setups and optimization (5000+ nodes, 15+ regions, 200+ apps, 5.0M+ annual spend).
  • Deep partnership with AWS and ServiceNow to bring end-to-end Governance360 covering Asset Management, CMDB, Vulnerability & Patch Management, SIEM/SOAR, Cost/Security/Compliance Dashboards.
  • Products created and deployed on AWS for Self-Service Cloud Portals and Purpose-built cloud solutions covering HPC (High Performance Computing), Containers, Service Catalog, Cost & Budget tracking, and Scientific Research workflows.
  • Our work and resources cover Cloud Infrastructure, Cloud Apps, Cloud Data and Cloud Service Delivery with 800+ cloud trained resources, 450+ Cloud specialists and 100+ certifications.
  • RL is the number one globally preferred AWS partner as an ISV provider for Scientific Research Computing, building solutions using AWS open-source offerings like Service Workbench.


Our unique positioning of Products + Services helps create platform-based offerings delivered as playbooks for digital transformation.

Our key focus areas in Cloud Offerings are the following:

  • Cloud Management & Governance
  • Full Lifecycle Automation and Self-Service Portals
  • Containers, Microservices, Well Architected Frameworks and Kubernetes
  • AIOps and Site Reliability Engineering

What Makes Us Different?

  • Automation-First approach across the “Plan, Build & Run” lifecycle helps customers use “Cloud the Right Way,” focused on best practices like “Infrastructure as Code” and “Compliance as Code.”
  • RLCatalyst Products offer Enterprise Cloud Orchestration and Governance with a pre-built library of quick-starts, BOTs, Self-Service Cloud Portals, and Open-source solutions.
  • AWS + ServiceNow unique specialization leveraged to provide Intelligent Cloud Operations & managed services.
  • ServiceOne AIOps Platform covering workload migration, security, governance, CMDB, ITSM and DevOps.
  • Frictionless Digital Application modernization and Cloud Product Engineering services for native cloud architecture and competencies.
  • Open-Source Co-Development with AWS for Scientific Research Solutions (Higher Ed and Healthcare).
  • Agile Analytics with our Spectra data platform, which helps build Enterprise Data Lakes and Supply Chain analytics with multiple ERP system connectors.

Our Solutions Sweet Spot
Governance360
Built on AWS Control Services, this is a prescriptive and automated maturity model for proper workload migration, governance, security, monitoring, and Service Management.

RLCatalyst BOTS Automation Engine and ServiceOne
A product covering end-to-end automation with a library of 100+ pre-built BOTs, including intelligent user and workspace onboarding and offboarding.

Research Gateway – Self Service Cloud Portals
A self-service cloud portal for scientific research in the cloud with HPC and genomic pipelines, covering EC2, SageMaker, S3, and more.

ServiceNow AppInsights built on AWS AppRegistry
Dynamic Applications CMDB leveraging AWS and ServiceNow with focus on Application Centric costs, health, and risks.

DevOps Driven Engineering and Cloud Product Development
DevOps-driven CI/CD, Infra Automation, and Proactive Monitoring. AWS Well-Architected reviews. Cloud App Modernization, APM, API Gateways, and Cloud Integration with Enterprise Systems. AWS Digital Customer Experience competencies.

SPECTRA Data Platform for Cloud Data Lakes
Enterprise Data Lake with large data movement from on-prem to Cloud systems and ERP integration adapters for Supply Chain Analytics.

AWS Product Focus Areas
Control Tower, Security Hub, Service Catalog, HPC, Quantum Computing, Data Lake, ITSM Connectors, Well-Architected, SaaS (Software as a Service) Factory, Service Workbench, CloudEndure, AppStream 2.0, QuickStart for HIPAA, Bioinformatics

Focus on Software, Databases, Workloads
Open-source and App development stacks, Java, Python, MS .Net, Cloudera, Databricks, MongoDB, RedShift, Hadoop, Snowflake, Magento, WordPress, Moodle, RStudio, Nextflow


Key Verticals Solutions

  • Technology companies (ISVs & startups)
  • Media/Publishing/Higher Education/ Research
  • Pharma/Healthcare/Life Sciences
  • Financial and Insurance

The following are some Customer Solutions highlights:


  • Digital Publishing & Learning Specialist – Cloud Migration, DevOps, and Digital Platform Development covering Content, Commerce, E-Learning and CRM products, User Experience Design, Cloud Architecture, Data Cloud/BI, sustaining engineering, performance testing, and Automation.
  • Global Pharma & Health Sciences Leader – Data Analytics/Search Solutions leveraging Cloud & Big Data technologies; an Enterprise Data Lake analyzing ERP data (SAP and others) with extraction, loading, cleansing, aggregation, data modelling, and visualizations; a Self-Service Portal for AWS and Hybrid Cloud provisioning.
  • Large Financial & Asset Mgmt. Firm – Driving Cloud Adoption, App Modernization, and DevOps models as part of the IT Transformation journey, leveraging their Cloud, Automation, and Data Platforms.
  • Specialist Automation ISV – Global partnership working across joint long-term engagements with multiple enterprise customers covering Infrastructure Automation, Application Deployment Automation, Compliance-as-Code, and Hybrid Cloud Automation.

Summary
Relevance Lab has close collaboration and partnership with AWS for both products and competencies. We have been part of successful digital transformations with 50+ customers leveraging AWS across Infrastructure, Applications, Data Lakes, and Service Delivery Automation. We enable AWS Cloud adoption “The Right Way” with our comprehensive expertise and pre-built solutions, delivering outcomes better, faster, and cheaper.

To learn more about our cloud products, services, and solutions, feel free to contact us at marketing@relevancelab.com.

References
Get Dynamic Insights into Your Cloud with an Application-Centric View
Automation of User Onboarding and Offboarding Workflows



