2020 Blog, Blog, Feature Blog, Featured

Relevance Lab, in partnership with ServiceNow and AWS, has launched a new solution (a ServiceNow scoped application) to consume Intelligent Automation BOTs from within the ServiceNow self-service Portal, with 1-Click automation of assets and service requests using the Information Technology Service Management (ITSM) governance framework. This RLCatalyst BOTs Service Management (RLCatalyst BSM) connector is available for private preview and will soon also be available on the ServiceNow Marketplace. It integrates with the ServiceNow self-service Portal and Service Catalog to dynamically publish an enterprise library of BOTs for achieving end-to-end automation across infrastructure, applications, service delivery, and workflows. This solution builds on the concept of the “Automation Service Bus” architecture explained in an earlier blog.

The biggest benefit of this solution is a transition to a “touchless” model for automation within the ServiceNow Self Service Portal, with a dynamic sync of enterprise automation libraries. It provides the ability to add new automation without the need to build custom forms or workflows inside ServiceNow. This makes creation, publishing, and lifecycle management of BOTs automation frictionless within the existing governance models of ITSM and Cloud, leading to faster rollout and ROI. Customers adopting this solution can significantly optimize ServiceNow and Cloud operations costs with self-service models. A typical large enterprise Service Desk team gets a huge volume of inbound tickets daily, and more than 50% of these can be re-routed to self-service requests with a proper design of the service catalog, automation, and user training. With BOTs handling requests that would normally cost US $5-7 each to fulfil, there is a significant and measurable ROI, along with faster fulfilment, a better user experience, and system-based compliance that helps in audits.

The following are the key highlights of this solution:

  • Rendering of RLCatalyst BOTs under the ServiceNow Service Catalog for 1-Click order and automation with built-in workflow approval models.
  • Ability of ServiceNow Self Service users to order any Automated Service Request from this standard catalog, covering common workflows such as:
    • Password Reset Requests.
    • User Onboarding.
    • User Offboarding.
    • AD/SSO/IDAM integration.
    • Access and Control for apps, tools, and data.
    • G-Suite/O365/Exchange Workflows.
    • Installation of new software.
    • Any standard service request made available by enterprise IT in a standard catalog.
  • Security and approvals integrated with existing ServiceNow and AD user profiles.
  • Ability to invoke any BOT from the RLCatalyst BOTs server, which provides integration with agent-based, agentless, Lambda function, script, API-based, and UI-based automation functionality.
  • A pre-built library of 100+ BOTs provided as an out-of-the-box solution.

As a complementary solution to the AWS Service Management Connector, customers can achieve complete automation of their Asset and Service Requests with secure governance. For assets consumed on non-AWS footprints like VMWare, Azure, and on-prem systems, the solution supports automation with Terraform templates to address hybrid-cloud platforms.

What are BOTs?
BOTs are automation functionality dealing with common DevOps, TechOps, ServiceOps, SecurityOps, and BusinessOps tasks. BOTs follow an Intelligent Automation maturity model, as explained in an earlier blog.

  • BOTs Intelligent Maturity Model
    • Task Automation.
    • Process Automation.
    • Decisioning Driven Automation.
    • AI/ML Based Automation.

BOTs vs Traditional Automation

  • BOTs are reusable – separation of Data and Logic.
  • BOTs support multiple models – AWS Lambda Functions, Scripts, Agent/Agentless, UIBOTs, etc. – with better coverage.
  • BOTs are managed in a code repository with Config Management (Git repo) – this allows changes to be “Managed” vs “Unmanaged scripts”.
  • BOTs are wrapped in YAML Definitions and exposed as Service APIs – this allows BOTs to be invoked from Third-Party Apps (like ServiceNow).
  • BOTs are “Managed & Supervised Runs” – the BOT Orchestrator manages the lifecycle to bring in Security, Compliance, Error Handling, and Insights.
  • BOTs have a Lifecycle for Intelligent Maturity.
  • BOTs run on an Open Source platform that can be extended and integrated with existing tools on a journey to achieve AIOps Maturity.
  • BOTs are deeply embedded with ServiceNow and leverage data and transaction integration in a bi-directional way.
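To illustrate the YAML-wrapped definition mentioned above, a hypothetical BOT definition might look like the following. All field names here are illustrative assumptions, not the actual RLCatalyst schema:

```yaml
# Hypothetical BOT definition (illustrative only; not the actual RLCatalyst schema)
bot:
  name: reset-ad-password
  type: script              # other assumed types: lambda, agent, agentless, ui
  description: Reset an Active Directory user password
  inputs:
    - name: user_id
      type: string
      required: true
  execution:
    entrypoint: scripts/reset_password.ps1
    timeout_seconds: 300
  api:
    path: /bots/reset-ad-password/run   # exposed as a Service API for third-party apps
    method: POST
```

Because the logic lives in the entrypoint script and the data contract lives in the YAML inputs, the same BOT can be reused across requests with different parameters.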

The following image explains the RLCatalyst BOTs Service Management Architecture.

How does RLCatalyst BOTs Service Management work?
Integrating your ServiceNow instance with the RLCatalyst BOTs Server helps you publish self-service-driven automation to your ServiceNow Service Portal without the need for custom coding or form design. Your users can order items from the Service Catalog, which are then fulfilled by BOTs while maintaining a record of the transactions in ServiceNow via Service Requests.

The ServiceNow administrator first downloads the scoped application and installs it in their ServiceNow instance. The application can be deployed from the GitHub repository provided by Relevance Lab. In the near future, this application will also be available from the ServiceNow Application Store.

Once installed, the application is configured by the ServiceNow administrator, who fills in the “BOTs Server Configuration” form. The required parameters are the BOTs Server URL, Server Name, Is Default, Username, and Password. This information is stored in the ServiceNow instance and is then used to discover and publish BOTs from the RLCatalyst BOTs Server.

The application administrator clicks on the Discover BOTs screen to retrieve the list of the latest BOTs available on the BOTs Server. Once this list is displayed, the administrator can choose the BOTs they want to publish and select the kind of workflow they want to associate with each BOT (none, single, or multi-level approvals). Clicking the Publish button then publishes the BOTs to the Service Portal along with all the forms associated with each BOT for input.

End-users can then use the self-service Catalog items to request fulfilment by BOTs.
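The discover-and-publish step described above can be sketched as a small piece of logic. This is a minimal illustration under stated assumptions: the function name, the shape of the discovered-BOT records, and the catalog-item fields are all hypothetical, not the actual RLCatalyst or ServiceNow APIs:

```python
# Illustrative sketch of the discover-and-publish step.
# Data shapes and names are assumptions, not real RLCatalyst/ServiceNow APIs.

WORKFLOWS = {"none", "single", "multi"}  # approval models offered at publish time

def publish_bots(discovered_bots, selections):
    """Map BOTs chosen by the administrator to Service Catalog entries.

    discovered_bots: list of dicts from the BOTs Server discovery call,
                     e.g. {"name": "reset-password", "inputs": ["user_id"]}
    selections:      dict of BOT name -> approval workflow ("none"/"single"/"multi")
    """
    catalog_items = []
    for bot in discovered_bots:
        workflow = selections.get(bot["name"])
        if workflow is None:
            continue  # administrator chose not to publish this BOT
        if workflow not in WORKFLOWS:
            raise ValueError(f"unknown workflow: {workflow}")
        catalog_items.append({
            "catalog_item": bot["name"],
            "approval_workflow": workflow,
            # one form variable per BOT input, so no custom form design is needed
            "form_variables": list(bot.get("inputs", [])),
        })
    return catalog_items
```

The key point the sketch captures is that catalog items and their input forms are derived from BOT metadata, so publishing a new BOT needs no custom coding in ServiceNow.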

What is the standard library of RLCatalyst BOTs available with this solution?
RLCatalyst provides a library of 100+ BOTs for common Service Management tickets that can help achieve up to 30-50% automation with out-of-the-box functionality across multiple areas, as explained in the diagram below.

  • User Onboarding and Offboarding.
  • Cloud Management.
  • DevOps.
  • Notification Services.
  • Asset Management.
  • Software and Applications Access Management.
  • Monitoring and Remediation.
  • Infrastructure Provisioning with integration to AWS Service Catalog.

Summary of Solution benefits
The RLCatalyst BOTs Service Management connector provides an enterprise-wide automation solution that integrates ServiceNow with Hybrid Cloud assets and enables self-service models. The automation of Asset and Service requests provides significant productivity gains for enterprises; in our own experience it has resulted in 10 FTE of productivity gains, 70% automation of inbound requests, more than US $500K of annual savings on operations costs (including reduced headcount), ITSM license costs, and Cloud asset usage optimized with compliance, along with 50% efficiency gains on internal IT Workflows.

The following are some key blogs with details of the solutions addressed with this RLCatalyst BSM connector.


For more details, please feel free to reach out to marketing@relevancelab.com





Relevance Lab, in partnership with AWS, has launched a new solution to enable self-service collaboration for Scientific Computing using AWS Cloud resources. Scientific research is enabling new innovations and discoveries in multiple fields to make human life better. There are large and complex programs funded by governments, the public sector, and private organizations, and higher-education institutions and universities globally have a specialized focus on research programs.

Some research institutions already use an existing ITSM Portal for self-service, and our previous blog explains the solution integrated with popular ITSM tools like ServiceNow – AWS Research Workbench. In this blog we cover the common scenario of research institutions needing an open-source-based custom self-service platform to integrate a community within the institution, and also with outside organizations, in a federated manner.

Why do we need an RLCatalyst Research Gateway cloud solution?
Research is a specialized field, with the community focusing on using “Science” to find common solutions to human problems in areas such as Health and Medicine, Space, and Earth. The need to drive frictionless research across geographies requires the ability to focus on “Science” while addressing the specific needs of People-Programs-Resources interactions. The “RLCatalyst Research Gateway” acts as a bridge, providing seamless and secure interactions, access to programs and budgets, and the ability to consume and manage the lifecycle of research-related computational and data resources.


  • PEOPLE – Specialized groups of researchers collaborating across organizations, disciplines, and countries with open collaboration needs.
  • PROGRAMS – Specialized research programs, funding, grants, governance, reporting, publishing outcomes, etc.
  • RESOURCES – High Performance Computing resources, large data sets for studies, analytics models, data security and privacy considerations, sharing and collaboration, common federated Identity and Access Management, etc.

The key requirements for the Cloud-based RLCatalyst Research Gateway are as follows.

  • Standard Research Needs
    • Roles, Workflows, Research Tools, Governance, Access and Security, Integration.
    • People-Programs-Resources Interactions.
    • Intramural and Extramural Research.
    • Infrastructure, Applications, Data, and Analytics.
  • Built on Cloud
    • Easy to deploy, consume, manage and extend – should align with existing infrastructure, applications, and cloud governance.
    • Leverage AWS Research products.
  • Leverage Open-Source with an enterprise support model
    • Supports both Self-hosting and Managed Hosting options.
    • Cost effective – pre-built IP and packaged service offerings.

The diagram below explains the RLCatalyst Research Gateway cloud solution. The solution provides researchers with one-click access to collaborative computing environments operating across teams, research institutions, and datasets while enabling internal IT stakeholders to provide standard computing resources based on a Service Catalog, manage, monitor, and control spending, apply security best practices, and comply with corporate governance.

Using the RLCatalyst Research Gateway cloud solution
The basic setup models a research institution or university that needs to support different research departments, principal investigators, researchers, project catalogs, and budgets. The diagram below explains a typical setup of the key stakeholders and the different entities inside the RLCatalyst Research Gateway.

  • Research Organization/Institution.
  • Research Departments.
  • Principal Investigators.
  • Researchers.
  • Site Administrator.
  • Project Catalog of Cloud Products.
  • Budget for Project and Researcher.

RLCatalyst Research Gateway solution map
There are three key sets of role-based functionality built into the RLCatalyst Research Gateway solution:

  • Researcher Workflows.
  • Principal Investigator Workflows.
  • Site Administrator Workflows.

The RLCatalyst Research Gateway solution components
A number of AWS components have been used to build the RLCatalyst Research Gateway solution to make it easier for the research community to focus on science rather than the headaches of managing cloud infrastructure. At the same time, existing investments of research institutions in AWS are leveraged and best practices are integrated without the need for custom or proprietary solutions. The following is a sample list of AWS products used in the RLCatalyst Research Gateway; more products can be easily integrated.

  • AWS Service Catalog – Core products available for Research Consumption.
    • Amazon SageMaker notebooks.
    • Amazon EC2 instances.
    • Amazon S3 buckets.
    • Amazon WorkSpaces.
    • Amazon RDS data stores.
    • AWS HPC (high-performance computing).
    • Amazon EMR.
  • Amazon Cognito for Access and Control.
  • AWS Control Tower for account management and governance.
  • AWS Cost Explorer and Billing for Project and Researcher budget tracking.
  • Amazon SNS and Amazon EventBridge for Notification Services.
  • AWS CloudFormation for template designs.
  • AWS Lambda for serverless computing.
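The project/researcher budget tracking mentioned above can be sketched in a few lines of logic. This is a minimal sketch under stated assumptions: the class and method names are illustrative, and in the real solution spend figures would come from a source such as AWS Cost Explorer:

```python
# Minimal sketch of project/researcher budget tracking (illustrative names).
# In the actual solution, spend data would be pulled from AWS Cost Explorer.

class ProjectBudget:
    def __init__(self, limit):
        self.limit = limit   # total budget allocated to the project
        self.spend = {}      # researcher -> spend recorded so far

    def record_spend(self, researcher, amount):
        """Accumulate spend per researcher against the shared project budget."""
        self.spend[researcher] = self.spend.get(researcher, 0.0) + amount

    def total_spend(self):
        return sum(self.spend.values())

    def remaining(self):
        return self.limit - self.total_spend()

    def over_budget(self):
        return self.total_spend() > self.limit
```

A gateway built on this idea could, for example, stop provisioning new catalog products once `over_budget()` is true for a project.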

The RLCatalyst Research Gateway solution, created in partnership with AWS, is available in an Open Source model with Enterprise Support options. The solution can be deployed in a Self-hosted Cloud or used in a Managed Hosting model, with customization options available as needed.

For a demo video, please click here.

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, AIOps Blog, Blog, Feature Blog, Featured

With growing Cloud adoption and the need for Intelligent Infrastructure Automation, larger enterprises are adopting Hybrid-Cloud solutions like Terraform. Creating reusable templates for provisioning complex infrastructure setups in the Cloud and in data centers, orchestrating Cloud management with self-service Portals, and monitoring the full lifecycle of assets can provide flexible, reusable, and scalable enterprise cloud solutions.
Relevance Lab is a HashiCorp partner with multiple successful enterprise infrastructure automation implementations using Terraform, covering AWS, Azure, VMWare, and GCP with 5000+ node setups.


Solution Highlights:

  • Our solution allows you to completely rebuild stacks using automation. It is instrumental in provisioning newer environments with minimal code changes.
  • It has the built-in ability to replicate stacks across multiple regions with minimal code changes.
  • Capability to add/remove instances from components with a few code changes.
  • Simple code structure. Any new infrastructure needs can be easily provisioned by modifying the variables.
  • Ability to modify instance attributes such as volumes, instance sizing, AMIs, and security groups with minimal code changes.

Tools & Technologies:
Terraform from HashiCorp has emerged as a leading infrastructure automation tool. Terraform helps in building, changing, and versioning infrastructure efficiently. Terraform is declarative: with the help of configuration files, we can describe the components to be built and managed across the entire datacenter.
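As a minimal illustration of this declarative style, the sketch below describes a desired set of EC2 instances; Terraform computes the changes needed to reach that state. All resource names, variable values, and the AMI ID are placeholders, not taken from an actual stack:

```hcl
# Hypothetical example: describe the desired state; Terraform works out the changes.
provider "aws" {
  region = var.region
}

variable "region" {
  default = "us-east-1"
}

variable "instance_count" {
  default = 2
}

variable "ami_id" {
  default = "ami-12345678"   # placeholder AMI ID
}

resource "aws_instance" "web" {
  count         = var.instance_count   # add/remove instances by changing one variable
  ami           = var.ami_id
  instance_type = "t3.micro"

  tags = {
    Name = "web-${count.index}"
  }
}
```

Changing `instance_count` from 2 to 3 and re-running `terraform apply` would create exactly one additional instance, which is the "few code changes" scaling model described in the highlights above.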


Design Considerations:
Below are some of our design considerations based on standard practices in infrastructure automation. These structures have helped us gain flexibility and ease in scaling stacks based on demand.

  1. Code Repo Structure: Each AWS stack is a separate GitHub repo, while Terraform modules are kept in a shared repo.

    • It makes the code design structure very scalable: newer AWS stacks can be created in a different region, stacks can be re-built in case of a disaster, and more resources can be added based on traffic/load.
    • A separate repo helps in maintaining isolation, as each stack has a varied footprint of resources.
    • It helps in security and compliance, as audits can be performed against a specific stack.

  2. Segmentation: The design model below shows the automation build-out for each AWS account. Each layer is well segmented and can be easily scaled based on need, and making a specific change to any one layer is easier.


  3. Integration: Fully integrated with GitHub for Continuous Integration and Continuous Deployment.

    • Each change is performed on a branch, which is merged via a Pull Request.
    • Each Pull Request is reviewed, verified, and merged into the master branch.
    • Infrastructure changes are thoroughly tested during the terraform plan stage before terraform apply.

  4. Code reusability: Modules provide an easy way to abstract common blocks of configuration into reusable infrastructure elements.

    • Modules significantly reduce duplication, enable isolation, and enhance testability.
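A shared module of this kind can be sketched as follows. The module path, repository URL, and variable names are illustrative assumptions, not an actual Relevance Lab module:

```hcl
# modules/ec2-instance/main.tf — a shared, reusable building block (illustrative)
variable "name" {}

variable "ami_id" {}

variable "instance_type" {
  default = "t3.micro"
}

resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = {
    Name = var.name
  }
}

# A stack repo then consumes the shared module with its own inputs, e.g.:
# module "app_server" {
#   source        = "git::https://github.com/example-org/terraform-modules.git//ec2-instance"
#   name          = "app-server"
#   instance_type = "t3.large"
#   ami_id        = "ami-12345678"   # placeholder
# }
```

Because each stack repo only supplies variables, the same module can back many stacks, which is how duplication is reduced while keeping stacks isolated.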

Benefits:

  • Provides the ability to spin up an entire environment in minutes.
  • It reduces the time to roll out complex network and storage changes to a few minutes.
  • Infrastructure is managed as code, and all changes are tested well, resulting in fewer outages caused by infra configuration changes.
  • It is easy to operate and maintain because Terraform uses a declarative language.
  • Infra is idempotent and driven toward a state-based desired configuration.

Conclusion:
Using Terraform design best practices, enterprises can quickly build and manage infrastructure that is highly scalable and efficient. Further, this automation can be hooked into a Jenkins pipeline project for automated code pushes of infra changes, which can be tied to a standard release and deployment process.


  • Leveraging Chef for configuration management, handling all application software installation and configuration via Chef cookbooks and recipes.
  • Leveraging InSpec for auditing the properties of AWS resources.

There are a few other additions that could be introduced to this design to create a tight bond between security and compliance policies and infrastructure as code. That may be achieved by integrating with Sentinel, which helps prevent infra provisioning when there are deviations in the infra code that do not adhere to security policies. Sentinel helps us build a fine-grained, condition-based policy framework.


For more details or enquiries, please write to marketing@relevancelab.com

