2020 Blog, Blog, Feature Blog, Featured

Relevance Lab, in partnership with ServiceNow and AWS, has launched a new solution (a ServiceNow scoped application) to consume Intelligent Automation BOTs from within the ServiceNow Self-Service Portal, with 1-Click automation of assets and service requests using the Information Technology Service Management (ITSM) governance framework. This RLCatalyst BOTs Service Management (RLCatalyst BSM) connector is available for private preview and will soon also be available on the ServiceNow Marketplace. It integrates with the ServiceNow Self-Service Portal and Service Catalog to dynamically publish an enterprise library of BOTs, achieving end-to-end automation across infrastructure, applications, service delivery and workflows. This solution builds on the "Automation Service Bus" architecture explained in an earlier blog.

The biggest benefit of this solution is the transition to a "touchless" model for automation within the ServiceNow Self-Service Portal, with a dynamic sync of enterprise automation libraries. New automation can be added without building custom forms or workflows inside ServiceNow. This makes the creation, publishing and lifecycle management of BOTs frictionless within the existing ITSM and Cloud governance models, leading to faster rollout and ROI. Customers adopting this solution can significantly optimize ServiceNow and Cloud operations costs with self-service models. A typical large enterprise Service Desk receives a huge volume of inbound tickets daily, and more than 50% of these can be re-routed to self-service requests with a proper design of the service catalog, automation and user training. With each ticket fulfilment (which normally costs US $5-7) now handled by BOTs, there is a significant and measurable ROI along with faster fulfilment, a better user experience and system-based compliance that helps in audits.

Following are the key highlights of this solution

  • Rendering of RLCatalyst BOTs under ServiceNow Service Catalog for 1-Click order and automation with built in workflow approval models.
  • Ability of ServiceNow Self-Service users to order any Automated Service Request from this standard catalog, covering common workflows such as:
    • Password Reset Requests.
    • User Onboarding.
    • User Offboarding.
    • AD/SSO/IDAM integration.
    • Access and Control for apps, tools, and data.
    • G-Suite/O365/Exchange Workflows.
    • Installation of new software.
    • Any standard service request made available by enterprise IT in a standard catalog.
  • Security and approvals integrated with existing ServiceNow and AD user profiles.
  • Ability to invoke any BOT from the RLCatalyst BOTs server, which provides integration for agent-based, agent-less, Lambda function, script-based, API-based and UI-based automation functionality.
  • A pre-built library of 100+ BOTs provided as out-of-the-box solution.

As a complementary solution to the AWS Service Management Connector, customers can achieve complete automation of their Asset and Service Requests with secure governance. For assets consumed on non-AWS footprints like VMware, Azure and on-prem systems, the solution supports automation with Terraform templates to address hybrid-cloud platforms.

What are BOTs?
BOTs are automation functionality dealing with common DevOps, TechOps, ServiceOps, SecurityOps and BusinessOps tasks. BOTs follow an Intelligent Automation maturity model, as explained in an earlier blog.

  • BOTs Intelligent Maturity Model
    • Task Automation.
    • Process Automation.
    • Decisioning Driven Automation.
    • AI/ML Based Automation.

BOTs vs Traditional Automation

  • BOTs are reusable – separation of Data and Logic.
  • BOTs support multiple models – AWS Lambda functions, scripts, agent/agentless, UI BOTs, etc. – with better coverage.
  • BOTs are managed in a Code repository with Config Management (Git Repo) – this allows the changes to be “Managed” vs “Unmanaged scripts”.
  • BOTs are wrapped in YAML definitions and exposed as Service APIs – this allows BOTs to be invoked from third-party apps (like ServiceNow); see the sketch after this list.
  • BOTs are “Managed & Supervised Runs” – BOT Orchestrator manages the lifecycle to bring in Security, Compliance, Error Handling and Insights.
  • BOTs have a Lifecycle for Intelligent Maturity.
  • Open Source Platform that can be extended and integrated with existing tools on a journey to achieve AIOps Maturity.
  • Very deeply embedded with ServiceNow and leverages data and transaction integration in a bi-directional way.
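
Because each BOT is wrapped in a YAML definition and exposed as a Service API, a third-party application can invoke it over REST. The following is a minimal, hypothetical sketch only – the endpoint path, payload fields and token handling shown here are assumptions for illustration, not the documented RLCatalyst API.

```python
import requests

BOTS_SERVER = "https://bots.example.com"      # hypothetical RLCatalyst BOTs Server URL
API_TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"

def invoke_bot(bot_id: str, payload: dict) -> dict:
    """Trigger a BOT run and return the orchestrator's response (illustrative only)."""
    response = requests.post(
        f"{BOTS_SERVER}/api/v1/bots/{bot_id}/execute",   # assumed endpoint shape
        json={"inputs": payload},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: a ServiceNow catalog item could call this when a request is approved.
run = invoke_bot("password-reset", {"user": "jdoe@example.com"})
print(run.get("status"), run.get("runId"))
```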

The following image explains the RLCatalyst BOTs Service Management Architecture.

How does RLCatalyst BOTs Service Management work?
Integrating your ServiceNow instance with RLCatalyst BOTs Server helps you to publish self-service driven automation to your ServiceNow Service Portal without the need for custom coding or form design. Your users can order items from the Service Catalog, which are then fulfilled by BOTs while maintaining a record of the transactions in ServiceNow via Service Requests.

The ServiceNow administrator first downloads the scoped application and installs it in their ServiceNow instance. The application can be deployed from the GitHub repository provided by Relevance Lab. In the near future, it will also be available from the ServiceNow Application Store.

Once installed, the application is configured by the ServiceNow administrator, who fills in the "BOTs Server Configuration" form. The required parameters are BOTs Server URL, Server Name, Is Default, Username and Password. This information is stored in the ServiceNow instance and is then used to discover and publish BOTs from the RLCatalyst BOTs Server.

The application administrator clicks on the Discover BOTs screen to retrieve the list of the latest BOTs available on the BOTs Server. Once this list is displayed, the administrator can choose the BOTs they want to publish and select the kind of workflow to associate with each BOT (none, single or multi-level approvals). Clicking the Publish button then publishes the BOTs to the Service Portal along with all the input forms associated with each BOT.

End-users can then use the self-service Catalog items to request fulfilment by BOTs.

What is the standard library of RLCatalyst BOTs available with this solution?
RLCatalyst provides a library of 100+ BOTs for common Service Management tickets that can help achieve 30-50% automation with out-of-the-box functionality across multiple areas, as explained in the diagram below.

  • User Onboarding and Offboarding.
  • Cloud Management.
  • DevOps.
  • Notification Services.
  • Asset Management.
  • Software and Applications Access Management.
  • Monitoring and Remediation.
  • Infrastructure Provisioning with integration to AWS Service Catalog.

Summary of Solution benefits
The RLCatalyst BOTs Service Management connector provides an enterprise-wide automation solution integrating ServiceNow with hybrid cloud assets and enabling self-service models. The automation of Asset and Service requests provides significant productivity gains for enterprises; in our own experience it has delivered 10 FTE of productivity, 70% automation of inbound requests and more than US $500K of annual savings on operations costs (including reduced headcount), ITSM license costs and optimized, compliant usage of Cloud assets, along with 50% efficiency gains on internal IT workflows.

Following are some key blogs with details of solutions addressed with this RLCatalyst BSM connector.


For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Feature Blog, Featured

Relevance Lab, in partnership with AWS, has launched a new solution to enable self-service collaboration for Scientific Computing using AWS Cloud resources. Scientific research is enabling new innovations and discoveries in multiple fields to make human life better. There are large and complex programs funded by governments, the public sector and private organizations, and higher education institutions and universities globally have a specialized focus on research programs.

Some research institutions already use an existing ITSM portal for self-service, and our previous blog explains the solution integrated with popular ITSM tools like ServiceNow – AWS Research Workbench. In this blog we cover the common scenario of research institutions needing an open-source based, custom self-service platform that integrates a community within the institution and also, in a federated manner, with outside organizations.

Why do we need an RLCatalyst Research Gateway cloud solution?
Research is a specialized field, with the community focusing on using "Science" to find common solutions to human problems in areas such as Health and Medicine, Space and Earth sciences. Driving frictionless research across geographies requires the ability to focus on "Science" while addressing the specific needs of People-Programs-Resources interactions. The "RLCatalyst Research Gateway" acts as a bridge, providing seamless and secure interactions, access to programs and budgets, and the ability to consume and manage the lifecycle of research-related computational and data resources.


  • PEOPLE – Specialized groups of researchers collaborating across organizations, disciplines and countries, with open collaboration needs.
  • PROGRAMS – Specialized research programs, funding, grants, governance, reporting, publishing outcomes, etc.
  • RESOURCES – High Performance Computing resources, large datasets for studies, analytics models, data security and privacy considerations, sharing and collaboration, common federated Identity and Access Management, etc.

The key requirements for the Cloud-based RLCatalyst Research Gateway are as follows.

  • Standard Research Needs
    • Roles, Workflows, Research Tools, Governance, Access and Security, Integration.
    • People-Programs-Resources Interactions.
    • Intramural and Extramural Research.
    • Infrastructure, Applications, Data, and Analytics.
  • Built on Cloud
    • Easy to deploy, consume, manage and extend – should align with existing infrastructure, applications, and cloud governance.
    • Leverage AWS Research products.
  • Leverage Open-Source with an enterprise support model
    • Supports both Self-hosting and Managed Hosting options.
    • Cost effective – pre-built IP and packaged service offerings.

The diagram below explains the RLCatalyst Research Gateway cloud solution. The solution provides researchers with one-click access to collaborative computing environments operating across teams, research institutions, and datasets while enabling internal IT stakeholders to provide standard computing resources based on a Service Catalog, manage, monitor, and control spending, apply security best practices, and comply with corporate governance.

Using the RLCatalyst Research Gateway cloud solution
The basic setup models a research institution or university that needs to support different research departments, principal investigators, researchers, project catalogs and budgets. The diagram below explains a typical setup of the key stakeholders and different entities inside the RLCatalyst Research Gateway.

  • Research Organization/Institution.
  • Research Departments.
  • Principal Investigators.
  • Researchers.
  • Site Administrator.
  • Project Catalog of Cloud Products.
  • Budget for Project and Researcher.

RLCatalyst Research Gateway solution map
The RLCatalyst Research Gateway solution has role-based functionality built in for three key roles, related to the following.

  • Researcher Workflows.
  • Principal Investigator Workflows.
  • Site Administrator Workflows.

The RLCatalyst Research Gateway solution components
A number of AWS components have been used to build the RLCatalyst Research Gateway solution to make it easier for the research community to focus on science rather than the headaches of managing cloud infrastructure. At the same time, research institutions' existing investments on AWS are leveraged and best practices are integrated without the need for custom or proprietary solutions. Following is a sample list of AWS products used in the RLCatalyst Research Gateway; more products can be easily integrated.

  • AWS Service Catalog – Core products available for research consumption.
    • Amazon SageMaker Notebooks.
    • Amazon EC2 instances.
    • Amazon S3 buckets.
    • Amazon WorkSpaces.
    • Amazon RDS data stores.
    • AWS HPC (high performance computing).
    • Amazon EMR.
  • Amazon Cognito for access and control.
  • AWS Control Tower for account management and governance.
  • AWS Cost Explorer and Billing for project and researcher budget tracking.
  • Amazon SNS and Amazon EventBridge for notification services.
  • AWS CloudFormation for template designs.
  • AWS Lambda for serverless computing.
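
For context on how the gateway can drive these products programmatically, the sketch below shows how a catalog item (for example, a SageMaker notebook product) could be provisioned through AWS Service Catalog using boto3. The product ID, artifact ID and parameter names are placeholders, not values from the actual solution.

```python
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")

# Provision a pre-approved catalog product on behalf of a researcher.
# ProductId/ProvisioningArtifactId/parameter names below are placeholders.
response = sc.provision_product(
    ProductId="prod-xxxxxxxxxxxx",
    ProvisioningArtifactId="pa-xxxxxxxxxxxx",
    ProvisionedProductName="sagemaker-notebook-researcher-jdoe",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "ml.t3.medium"},
        {"Key": "ProjectCode", "Value": "genomics-2020"},
    ],
)
print(response["RecordDetail"]["RecordId"], response["RecordDetail"]["Status"])
```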

The RLCatalyst Research Gateway solution, created in partnership with AWS, is available in an open-source model with enterprise support options. The solution can be deployed in a self-hosted Cloud or used in a managed hosting model, with customization options available as needed.

For a demo video please click here

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Cloud Blog, Featured

Based on AWS recommended best practices, this blog articulates governance and management at scale for customers implementing cloud security, covering the following themes:

  • Designing Governance at Scale
  • Governance Automation
  • Preventive Controls
  • Detective Controls
  • Bringing it all together

Need for a matured and effective Cloud Security Governance
To achieve agility, compliance and security, customers cannot rely on manual processes; automation plays a key role. This mandates the need for an integrated model called "Governance at Scale", which focuses on Account Management, Security and Compliance Automation, and Budget and Cost Management. This model helps customers stay on the fast track while ensuring workloads meet security and compliance requirements. Governance at Scale is an orchestration framework which includes enablement, provisioning and operations.


  • Account Management: Governance at Scale processes streamline account management across multiple AWS accounts and workloads in an organization through centralization, standardization and automation of account maintenance. This can be achieved through policy automation, identity federation and account automation.

  • Security and Compliance Automation: Governance at Scale practices consist of three main goals:
    • Identity and Access Automation: Customers can access their workloads based on their role privileges, as defined by the organization's policies. Access to new services can be added at an OU level, and the changes apply across all cloud accounts at that level.
    • Security Automation: To maintain a secure posture at scale, security tasks and compliance assessments also require automation. Automation reduces implementation effort, as templates ensure that services and projects are secure and compliant by default. Customers can also respond faster when a policy violation occurs.
    • Policy Enforcement: AWS guidance for achieving Governance at Scale helps you enforce policies on AWS Regions, AWS services and resource configurations. Policy enforcement happens at different levels – Region, service and resource configuration – and at the organizational or resource level. Enforcement is based on roles, responsibilities and compliance regulations (such as HIPAA, FedRAMP and PCI DSS).

  • Budget and Cost Management: This framework helps organizations proactively make decisions on budget controls and allocation across the organization, and primarily consists of budget planning and enforcement.
    • Budget Planning: This allows financial owners to allocate and subdivide the available budget from a given funding source appropriately across the company. Financial dashboards provide real-time insights to decision makers over the lifetime of the funding source.
    • Budget Enforcement: Budget enforcement can happen at each layer, department or project in an organization, as these can have different budgetary needs and limits. The governance framework allows the organization to assign budgets and define thresholds while monitoring spending in real time, and it can proactively notify the relevant stakeholders and trigger enforcement actions.

Some of this Intelligent Automation includes

  • Restrict the use of AWS resources to those that cost less than a specified price.
  • Throttle new resource provisioning.
  • Shut down, end or deprovision AWS resources after archiving configurations and data for future use.
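
As an illustration of how such budget guardrails can be automated, the sketch below creates a monthly cost budget with an 80% alert threshold using the AWS Budgets API via boto3. The account ID, budget name and notification e-mail are placeholders; enforcement actions (throttling or shutting down resources) would be wired to the resulting alerts separately.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",                       # placeholder account ID
    Budget={
        "BudgetName": "research-project-monthly",   # placeholder budget name
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    }],
)
```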

Implementing Governance at Scale with Ideal Landing Zone architecture


Key Process and Services to implement Governance at Scale Framework

AWS Control Tower: This is a native service used to set up and govern a secure, compliant, multi-account AWS environment, automated using AWS best-practice blueprints. Its multi-account structure enables aggregated, centralised logging, monitoring and operations.

  • Establish and Enable Guardrails: AWS Control Tower includes guardrails, which are high-level policies that provide ongoing governance. It allows you to adopt established security best practices across the AWS environment managed by Control Tower.
  • Automate Compliant Account Provisioning: Automate account provision workflow using Account Factory.
  • Centralize Identity and Access: Using AWS SSO, the service centralizes access and identity management in line with standard best practices.
  • Log Archive Account: The log archive centralizes logs and provides a single source of truth for all the account activities. The account works as a repository for API activity logs and resource configurations from all accounts in the landing zone. It contains the centralized logging for AWS CloudTrail and AWS Config.
  • Audit Account: The audit account is a restricted account. It is designed to provide security and compliance teams read and write access to all accounts in your landing zone. It can be a main account for security services such as Amazon GuardDuty and AWS Security Hub.

Governance Lifecycle with Services: An integrated model covering AWS Config, AWS Systems Manager, Amazon GuardDuty and AWS Security Hub.

These services work together and play a crucial role in the Governance at Scale framework. Together, they allow your customers to

  • Define security rules and compliance requirements.
  • Monitor infrastructure against the rules and requirements.
  • Detect violations.
  • Get notifications in real time.
  • Take action in an effective and rapid manner.

AWS Config: This enables customers to assess, audit and evaluate their AWS configurations in real time. It monitors and records AWS resource configurations and automates the evaluation of recorded configurations against desired configurations.
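
As a small illustration of the detective-control side, the sketch below deploys one of AWS Config's managed rules (here, a check that S3 buckets do not allow public read) using boto3. The rule chosen is only an example of the kind of desired-configuration check described above.

```python
import boto3

config = boto3.client("config")

# Deploy an AWS-managed Config rule as a detective control.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-no-public-read",
        "Description": "Checks that S3 buckets do not allow public read access.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)
```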

AWS Systems Manager: This gives customers visibility through a unified user interface and allows them to control their infrastructure on AWS by automating operational tasks. With AWS Systems Manager, customers can

  • Group resources by application.
  • View operational data for monitoring and troubleshooting, and take action on groups of resources.
  • Streamline resource and application management.
  • Shorten the time to detect and resolve operational issues.
  • Simplify operations and management of the infrastructure – securely, at scale.

Amazon GuardDuty: It protects AWS accounts, workloads and data with intelligent threat detection and monitoring of malicious activity and unauthorized behavior. It uses machine learning, anomaly detection and integrated threat intelligence to identify and prioritize potential threats.
Customers enable GuardDuty from the AWS Management Console, where it analyzes billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs and DNS logs. By integrating with Amazon CloudWatch Events, GuardDuty alerts become actionable.

AWS Security Hub: This is the compliance and security center for AWS customers. Security Hub allows customers to centrally view and manage security alerts and automate security checks.
Security Hub automatically runs account-level configuration and security checks based on AWS best practices and open standards. It consolidates security findings across accounts and provider products and displays the results on the Security Hub console. It also supports integration with Amazon CloudWatch Events; to automate remediation of specific findings, customers can define custom actions to take when a finding is received.
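
To show how thin the enablement layer is, the sketch below turns on GuardDuty and Security Hub and registers a Security Hub custom action (the custom action is what a CloudWatch Events rule would match to trigger remediation). The names and IDs used are illustrative.

```python
import boto3

guardduty = boto3.client("guardduty")
securityhub = boto3.client("securityhub")

# Enable GuardDuty threat detection in this account/region.
detector = guardduty.create_detector(Enable=True)

# Enable Security Hub and register a custom action for remediation workflows.
securityhub.enable_security_hub()
action = securityhub.create_action_target(
    Name="SendToRemediation",
    Description="Route selected findings to an automated remediation workflow.",
    Id="SendToRemediation",
)
print(detector["DetectorId"], action["ActionTargetArn"])
```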


AWS Products Used


With AWS management and governance services, customers can improve their governance controls and fast-track their business objectives. However, solving these challenges is not straightforward, as many customers rely on a traditional IT management process which is manual and not scalable. Also, with a lack of clarity on account management and without clearly defined processes, they end up with multiple accounts, and provisioning and tracking become inefficient. This can also increase their security and financial risks. In some cases, due to these challenges, customers rely on third-party tools or solutions which can further complicate and increase operational challenges.

Relevance Lab can help organizations build new accounts or migrate existing accounts to a secure, compliant, multi-account AWS environment enabled with automation to increase both operational and cost efficiency. The transition to this mature Governance at Scale framework can be implemented in four weeks using our specialised competencies, the RLCatalyst automation framework and the Governance at Scale handbook.

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Featured

Nobody likes remembering credentials; they put plenty of pressure on memory. Worse, many people use the same username and password regardless of the application. Single Sign-On (SSO) is a method of authentication that permits websites to use other trusted sites to verify users, allowing a user to log into independent applications with one ID and password. Verifying user identity is essential for knowing which permissions a user should have. OKTA is a leading IDAM application, used by our client for managing access, that blends user identity management with SSO. SPECTRA, an analytical platform built on open-source technology, has recently been onboarded for this client in the publishing space. The client has integrated all their applications under one IDAM roof (OKTA), and SPECTRA follows the same route.

What is SPECTRA?
SPECTRA is a Big Data Analytics platform from Relevance Lab with the ability to consume, store and process structured and unstructured data. It can also cleanse and integrate this data into one platform. It depicts data intelligently and presents it using an intuitive visualization layer so that business users can get actionable business insights across various parameters. Coupled with an OCR engine, it also provides Google-like search capabilities across legacy unstructured and structured data.


SAML
In the modern era of computing, security is an essential feature of enterprise applications. Security Assertion Markup Language (SAML) is used to provide a single point of authentication at a secure identity provider, which means user credentials never need to leave the firewall boundary. SAML is used to assert the user's identity to others.

SAML SSO works by transferring the user's identity from one place (OKTA, the identity provider) to a service provider (SPECTRA). The application identifies the user's origin (by first name, last name and network email ID) and redirects the user back to the identity provider (OKTA), asking the user to authenticate with their IdP-registered credentials.

See the high level architectural diagram below.


Integrating with OKTA Idam Platform using SAML
An Identity Provider (IdP) is an entity that provides identities, including the ability to authenticate a user agent. The Identity Provider also holds additional user profile information such as first name, last name, job code, address, and so on. Some service providers may require only a simple user profile, while others may require a richer set of user data (job code, department, address, location, manager, etc.).

See the diagram below, which shows the SPECTRA and SAML integration.


A SAML Request, also referred to as an authentication request, is generated by SPECTRA (the Service Provider) to request authentication of the user agent by the IdP. The SAML Response is generated by the Identity Provider and contains the assertion for the authenticated user. Additionally, a SAML Response can contain further information, like user profile information and group/role information, depending on what the Service Provider supports.

See the picture below which shows SAML Integration flow.


SP-initiated sign-in describes the SAML sign-in flow when it is initiated by the Service Provider (SPECTRA). This is triggered when the end user tries to access a resource or log in directly on the Service Provider side, for example when the user agent (browser) tries to access a protected resource on the Service Provider side.

IdP-initiated sign-in depicts the SAML sign-in flow created by the Identity Provider. The IdP generates a SAML Response that is redirected to the Service Provider to confirm the user's identity, rather than the SAML flow being triggered by a redirection from SPECTRA. The Service Provider never interacts directly with the Identity Provider; the user agent (browser) carries out all the redirections. The Service Provider looks up which IdP to use from the MySQL database and does not authenticate the user until the SAML assertion comes back from the IdP.

An Identity Provider can initiate an authentication flow, and the SAML authentication flow is asynchronous: the Service Provider does not maintain any state for authentication requests, so the response it receives from the Identity Provider must contain all the required information. SPECTRA validates the OKTA user information against its MySQL DB and applies the assigned user roles in the application; the user can then view the assigned roles within the application.
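
For illustration, a minimal sketch of how an SP-initiated request could be packaged for the SAML HTTP-Redirect binding is shown below (raw DEFLATE, Base64, then URL-encode). The issuer, ACS and OKTA URLs are placeholders, and a production integration would normally use a SAML library such as python3-saml, which also handles signing and response validation.

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

ISSUER = "https://spectra.example.com"                     # placeholder SP entity ID
ACS_URL = "https://spectra.example.com/saml/acs"           # placeholder assertion consumer URL
IDP_SSO_URL = "https://example.okta.com/app/xxx/sso/saml"  # placeholder OKTA SSO endpoint

authn_request = f"""<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
  ID="_{uuid.uuid4().hex}" Version="2.0"
  IssueInstant="{datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')}"
  AssertionConsumerServiceURL="{ACS_URL}">
  <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">{ISSUER}</saml:Issuer>
</samlp:AuthnRequest>"""

# HTTP-Redirect binding: raw DEFLATE (strip zlib header/trailer), Base64, URL-encode.
deflated = zlib.compress(authn_request.encode("utf-8"))[2:-4]
saml_request = base64.b64encode(deflated).decode("ascii")
redirect_url = IDP_SSO_URL + "?" + urllib.parse.urlencode(
    {"SAMLRequest": saml_request, "RelayState": "/dashboard"}
)
print(redirect_url)  # the user agent is redirected here to authenticate with OKTA
```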

SPECTRA, a product from Relevance Lab, offers great flexibility as an analytical platform with the ability to consume, store and process structured and unstructured data. It can be integrated with various Identity and Access Management platforms like OneLogin, Auth0, Ping Identity, etc., using SAML.

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Featured

With the growing use of AWS Cloud across different industry segments for frictionless business, the use case of "Enabling Scientific Research" on the Cloud has unique benefits. Research is a very specialized field driven by a community of "Researchers" who want to focus on "Discovering Science, not Servers". Researchers' day-to-day work requires processing data, collaborating online, and trying to maintain labs remotely. There is a need to democratize research computing so that everyone can use it easily.

Working closely with our AWS partners, Relevance Lab is creating an AWS "Research Workbench" powered by Intelligent Automation that enables research institutions and researchers to use the Cloud in a frictionless manner.

Core functionality needed

  • The basic need of high-end, research-focused enterprises to leverage AWS products seamlessly for research-oriented business needs.
  • Specialized roles – Principal Investigator and Researchers under one or many research projects with different funding sources (public and private).
  • Ability to collaborate with Intramural and Extramural researchers.
  • Specialized tools and software needs for an Analytics solution – AWS SageMaker, EMR, AI/ML, HPC, data security, secure Workspaces, large data sets sharing capability etc.
  • Need for proper AWS Management & Governance with the ability to manage Self-Service (ITSM or custom portals) based lifecycle management (Provisioning, Managing, De-provisioning of users and assets).
  • Proper cost and budget management and controls.

Additional challenges for Research Projects

  • Massive Volumes of Data.
  • Cross functional research teams.
  • Research data management with compliance and security considerations.
  • Leveraging new techniques of AI/ML, serverless computing, spot instances for HPC etc.

The scientific community has to adapt to these challenges, and AWS Cloud provides the platform for collaboration, on-demand resources and scale in a secure and compliant manner. Bringing the relevant AWS tools together into a Research Workbench bundle makes this easier.

Catering to research needs requires special attention to the use cases that may come up. For example, a researcher may be working on a data science project using AWS SageMaker notebooks and a large volume of research data in an S3 bucket. Given the sensitive nature of the data, access to the bucket may need to be secured within the organization and accessible only from within a specific network. A researcher may also only need to access his or her own data and computing resources. We have developed a security model around these needs: researchers can only access the resources from a Workspace created for them for that purpose.
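
A minimal sketch of one such control is shown below: an S3 bucket policy that denies access unless requests arrive through a specific VPC endpoint (for example, the one serving the researchers' Workspaces). The bucket name and endpoint ID are placeholders, and a real deployment would combine this with IAM roles scoped per researcher.

```python
import json
import boto3

BUCKET = "research-project-data"             # placeholder bucket name
VPC_ENDPOINT_ID = "vpce-0123456789abcdef0"   # placeholder VPC endpoint ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessOutsideResearchNetwork",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        # Deny any request that does not come through the designated VPC endpoint.
        "Condition": {"StringNotEquals": {"aws:sourceVpce": VPC_ENDPOINT_ID}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```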


To cater to the above, the solution encompasses a "Research Portal" for user interactions and a specialized "Research Workbench" for collaborating on tools and data.

  • Research Portal – Managed with existing ITSM Self Service Portals like ServiceNow.
  • Research Workbench – Created by using AWS standard products, Service Catalog and Control Tower to enforce governance.

The above features allow creating and managing the lifecycle of research within an enterprise by leveraging investments in the existing ITSM portal and providing a seamless experience for AWS consumption. The solution leverages existing AWS best practices with Control Tower, Service Catalog, secure access and automated provisioning/deprovisioning of resources. A critical part of such a Research Portal is proper cost management and tracking of research budgets and consumption against them.

The following diagram explains the building blocks of a Research Workbench solution deployed with integration to ITSM Platforms like ServiceNow and using the AWS Service Management connector.


The reference deployment architecture using AWS Control Tower (CT) best practices is explained below. The access is controlled using AWS Simple AD and IAM roles.


The entire cycle of onboarding new researchers and provisioning assets for their research is automated using RLCatalyst BOTs solution with 1-Click deployment while still following the ITSM best practices as explained below.


Research Workbench Features
Following is a sample list of features planned (this is an indicative list only and not comprehensive)


Summary of Solution benefits
The solution is based on the pre-built functionality of the ServiceNow Self-Service Portal and standard AWS products, with our custom components integrating the two platforms for a specialized research-focused use case. The benefits include:


  • Quick start solution targeting Academic and Research Institutions – New and existing AWS customers.
  • Existing customers with ITSM investments.
    • Using existing ITSM platforms (ServiceNow, Jira Service Desk, Freshservice).
  • Focusing on primarily “Built on AWS Solution” with standard products.
    • AWS Control Tower, Service Catalog, ITSM Connector, Sagemaker, Workspaces, EC2, S3, RDS, EMR etc.
  • Deployment options.
    • Per customer Research Solution deployment (using customer Cloud and ITSM resources).
    • Hosted solution offered to customers (Managed Services based Cloud and ITSM platforms).
  • RLCatalyst-leveraged solution add-ons (Automation, Service Portal, Observability and Cost Governance).
  • Pre-built solution addressing 80-90% of standard needs, with scope for some customer-specific customizations.
  • Ability to onboard a new customer in 3-4 weeks based on the pre-built offering, with agility and low onboarding costs.

For a demo video please click here

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Featured, ServiceOne

AWS provides a Service Management Connector for ServiceNow and Jira Service Desk that lets end users provision, manage and operate AWS resources securely via the ITSM portal. However, a similar solution does not exist for Freshservice. The same maturity of end-to-end automation can be provided to Freshservice customers by using Relevance Lab's RLCatalyst BOTs solution, which provides an Automation Service Bus between ITSM tools and AWS Cloud assets.

Freshservice is an intelligent Service Management platform comprising all the essential modules: Incident Management, Problem Management, Change Management, Release Management, Project Management, Knowledge Management and Asset Management (including hardware, software and contracts). It also provides consolidated reports and analytics.

Many customers are adopting Freshservice as a cloud-based ITSM solution and orchestrating self-service requests for their organizations. One of the common automation needs is user and workspace onboarding and offboarding, which involves integration with HR systems, AWS Service Catalog and AWS Control Tower for proper management and governance. Similarly, using an Infrastructure as Code model, organizations are using CloudFormation-based templates for 1-Click provisioning of complex workloads.

The Freshservice workflow automator with RLCatalyst BOTs integration helps automate simple repetitive tasks like assignment of tickets to the right groups and setup of multi-level approvals. It offers a simple drag-and-drop interface which can automate most simple use cases, while the webhook option allows automation of complex workflows by integrating with the right automation tools. In addition, the business rules for forms feature lets you describe conditional logic and actions to create complex dynamic forms.

The diagram below illustrates the integration architecture between Freshservice, AWS and RLCatalyst.


Using the integrated solution, organizations can automate use cases related to both End User Computing (EUC) and standard server-side workload provisioning. Two common examples are:

  • User and Workspace Provisioning: Onboard a new user and request an AWS Workspace, where the original request is generated by Workday/Taleo.
  • Server Infrastructure Provisioning, Application Deployment and Configuration Updates: Request provisioning of a complex multi-node workload using a Service Catalog item fulfilled with an AWS CloudFormation template and post-provisioning setup.

The diagram below illustrates the EUC automation.


The steps to onboard a new user and Workspace in an automated manner are as follows.

  • RLCatalyst enables Freshservice to create a Service Request (SR) using the file generated from Workday or Taleo.
  • Once an SR is created, the Freshservice workflow automator triggers the approval workflow for auto approval, cost-based approval or role-based approval.
  • Once the defined approval workflow completes successfully, the next step is to ask RLCatalyst to trigger the onboarding workflow within RLCatalyst.
  • RLCatalyst then runs BOT 1 to create the user in Simple AD.
  • BOT 2 sends out a request to provision an AWS Workspace, while BOT 3 monitors the status of the workspace creation (a sketch of this provisioning call appears after this list).
  • Once BOT 3 reports successful provisioning, the workflow instructs AWS SNS to send a notification email to the end user with the workspace details and login credentials.
  • Finally, RLCatalyst sends a request back to Freshservice to close the SR as successful.
  • In case of failure of the workspace provisioning, RLCatalyst will instruct Freshservice to create an Incident for Root Cause Analysis (RCA).
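
For reference, the Workspace provisioning step (BOT 2) maps to a single AWS API call. The sketch below shows how it could be made with boto3; the directory ID, bundle ID and username are placeholders, and error handling in the real BOT would feed back into the Freshservice incident flow described above.

```python
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Provision an AWS WorkSpace for the newly onboarded user (IDs are placeholders).
response = workspaces.create_workspaces(
    Workspaces=[{
        "DirectoryId": "d-1234567890",
        "UserName": "jdoe",
        "BundleId": "wsb-0123456789abcdef",
        "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
    }]
)

if response["FailedRequests"]:
    # The BOT would report this back so Freshservice can open an Incident for RCA.
    print("Provisioning failed:", response["FailedRequests"][0]["ErrorMessage"])
else:
    print("WorkspaceId:", response["PendingRequests"][0]["WorkspaceId"])
```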

Similarly, a user can request a multi-node application stack deployment in AWS using the Freshservice service catalog. The diagram below illustrates the following steps:


  • Create the infrastructure with multiple AWS resources (EC2, S3, RDS etc).
  • Deploy one or more applications on the instances created (Web Tier, App Tier, DB Tier).
  • Configure the application with the run-time information, e.g. create DNS endpoints, bind the listening IP address of an application to the IP address of the instance created, and update YAML files with environment variable values.
  • Deploy the monitoring agents like Infra health, App health, Log monitoring and Service Registry.
  • Setup network configurations like hosted zones, routes etc and setup security configurations like SSL certificates.

The multi-stage orchestration requires a workflow for state and context management during the lifecycle and this is provided by using RLCatalyst Workflow capabilities.

Relevance Lab is a solution partner of Freshservice. We assist enterprises in adopting AWS Cloud with intelligent automation using RLCatalyst BOTs. Relevance Lab also offers a pre-integrated ServiceOne solution with Freshservice.

For a demo video and more details, please click here.

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Cloud Blog, Featured, RLCatalyst Blog

The adoption of Cloud and DevOps has changed how large enterprises traditionally manage the Infra, Middleware and Application lifecycle. There is a continuous "tension" between Operations and Development teams to achieve the right balance between "security + compliance" and "agility + flexibility". For large enterprises with multiple business units, global operations and assets distributed across multiple cloud providers, these issues are more complex. While there is no "silver bullet" that can solve all these issues, every enterprise needs a broad framework for achieving the right balance.

The broad framework is based on the following criteria:

  • IT teams predominantly define the infrastructure components like images, network designs, security policies, compliance guardrails, standard catalogs etc. based on the organization’s policies and requirements.
  • Application teams have the flexibility to order and consume these components and to manage post provisioning lifecycle specific to their needs.

The challenge faced by larger enterprises running multi-cloud workloads is the lack of a common orchestration portal that lets application teams raise self-service requests and use flexible workflows for managing workload configuration and the application deployment lifecycle. The standard cloud management portals from the major cloud providers have automated most of their internal provisioning processes, yet they don't provide customers with system-specific solutions or workload placement across various public and private clouds. To serve the needs of application groups, a portal is needed with the following key functionalities.


  • The self-service portal is controlled via role-based access.
  • Standard catalog of items for Infrastructure Management.
  • Flexible workflow for creating a full lifecycle of configurations management.
  • Microservices-based building blocks for consuming "Infrastructure as Code" and managing the post-provisioning lifecycle.
  • Ability to monitor the end to end provisioning lifecycle with proper error handling and interventions when needed.
  • Governance and management post provisioning across multiple workloads and cloud services.

Relevance Lab has come up with a microservices-based automation solution which automates enterprise multi-cloud provisioning, pre- and post-provisioning workflows, workload management, mandatory policies, configurations, and security controls. The end-to-end provisioning is automated and made seamless to the user by integrating with ServiceNow, domain servers, configuration servers and various cloud services. Multiple microservices handle each stage of the automation, making it highly flexible to extend to any cloud resource.

The building blocks of the framework are as shown here:


The Infrastructure as Code (IaC) templates, maintained in a source code repository, can cover a variety of resources, as shown in the table below.


Resource | Platform | Automated Process
Compute – VM/Server | VMware, AWS, Azure, GCP | Automated provisioning of VMs and their backup VMs
Compute – DB Server | VMware, AWS, Azure, GCP | Automated provisioning of DB servers and backup servers – Oracle, PostgreSQL, MSSQL, MySQL, SAP
Compute – HA and DR | VMware, AWS, Azure, GCP | Automated provisioning of HA and DR servers
Compute – Application Stack | AWS, Azure | Automated provisioning of application stacks using CFTs and ARM templates
Network – VPC | AWS, Azure, GCP | Automated provisioning of VPCs and subnets
Storage | AWS, Azure, GCP | Automated provisioning of S3 buckets or Blob storage
Storage – Gateways | AWS | Automated provisioning of storage gateways
DNS Server | AWS, Azure | Automated provisioning of DNS servers
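
To illustrate how such a catalog entry could be consumed programmatically, the sketch below launches one of these IaC templates as a CloudFormation stack from the orchestration layer using boto3. The template URL, stack name and parameters are placeholders, not artifacts from the actual solution.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch an IaC template stored in the source-code/artifact repository (placeholder URL).
stack = cfn.create_stack(
    StackName="app-stack-dev-001",
    TemplateURL="https://s3.amazonaws.com/iac-templates/app-stack.yaml",
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.medium"},
        {"ParameterKey": "Environment", "ParameterValue": "dev"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Tags=[{"Key": "RequestedVia", "Value": "SelfServicePortal"}],
)

# Block until provisioning finishes so the workflow can report status back to the portal.
cfn.get_waiter("stack_create_complete").wait(StackName=stack["StackId"])
print("Stack created:", stack["StackId"])
```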


Getting Started with Hybrid Cloud Automation – Our recommendations:

  • Generate standard cloud catalogue and create reusable automated workflows for processes such as approval and access control.
  • To optimize the management of resources, limit the number of blueprints. Specific features can be provisioned in a modular fashion in the base image.
  • Use a configuration management tool like Chef/Puppet/Ansible to install the various management agents.
  • Use the "Infrastructure as Code" principle to provision infrastructure in an agile fashion, using tools like GitHub, Jenkins and a configuration management tool.

Benefits:

  • Significantly reduced operations cost through less manual effort and proactive monitoring services using a single platform.
  • Reduced time to market for new cloud services by enabling single-click deployment.

For more details, please feel free to reach out to marketing@relevancelab.com



2020 Blog, Blog, Featured

ServiceNow is the dominant platform used by organizations for IT Service Management. Organizations use ServiceNow to build digital workflows and drive frictionless business, and by leveraging DevOps and automation they can speed up software release and upgrade cycles.

With two major releases per year and quarterly security patch updates, ServiceNow ensures that new features stay current with industry trends and in compliance with security mandates. However, to get the benefit of all these new features and security updates, organizations need to update to the latest version on a timely basis. The onus is on individual organizations to ensure all their customizations are tested thoroughly after every upgrade or security update. Some of these upgrades can involve a few hundred test cases, and testing each feature after every upgrade would typically take a few weeks to a few months depending on the number of test cases. Many organizations also build custom applications on top of the ServiceNow platform, which adds to the testing burden during upgrades.

ServiceNow has introduced an Automated Test Framework (ATF) from the Istanbul release onwards, which can automate testing and reduce the time taken from a few weeks to a few days. ATF is intended for regression testing and ensures that your existing functionality remains intact. It enables no-code and low-code users to create automated test scenarios with ease. ATF reduces upgrade bottlenecks by significantly reducing manual testing, with minimal business impact, and improves development efficiency.

Benefits of ServiceNow ATF:

  • Free and Out of Box (OOB) feature without any add-on cost.
  • Fast track upgrade and development time by shifting manual testing to automated testing.
  • Validate all your customizations with every change/update/upgrade.
  • Reduction of manual errors due to consistency in the way the test cases are run.
  • Reusable and simple to use.
  • Testing can be executed along with development resulting in better quality output.

As shown in the above example, a test case with about 10 scenarios, which would typically take 10 hours manually, takes only about an hour with ServiceNow ATF. This can be achieved by creating and running batches of tests with automated test suites: tests can be grouped into test suites, which allows a group of test cases to run as a single job.

What is the Automated Test Framework?
ATF is a tool to streamline the upgrade and QA processes by building automated tests that check whether software or configuration changes have potentially 'broken' any existing functionality. It also means developers no longer need to start operational activities like code refactoring just to generate new test cases.


Customer Solution:
Relevance Lab has helped a large US-based digital learning company benefit from intelligent automation of their ServiceNow instance using ATF. The customer uses ServiceNow extensively for ITSM, IT Asset Management, GRC, IT Operations Management and the vulnerability remediation lifecycle. Relevance Lab has implemented extensive automation of ServiceNow tickets (Incident, Problem, Change, Service Requests, Vulnerability Incident tickets, CMDB, etc.) using its RLCatalyst product. The automation involves a number of customised forms, workflows and data schemas which need to be validated every time a ServiceNow instance is upgraded. The normal upgrade cycle would take about a week, but ensuring complete testing post-upgrade took up to 3 weeks. To cut down the cycle time and increase quality, the entire upgrade cycle and associated functionalities were covered by automated testing using ServiceNow ATF. This reduced the testing effort for 400 test cases (104 flows) from 3 weeks to 0.5 days, an over 90% reduction in testing effort, with more accurate quality output.

The test cases spanned the following top categories:

  • SAML SSO.
  • Okta Provision.
  • User Access Requests.
  • Bot Automation.
  • Asset Catalog.
  • Change Management.
  • Surveys.
  • Contract Management.
  • Asset Management.
  • Knowledge Management
  • Reports & Dashboards.
  • GRC & GDPR.

Relevance Lab is a partner of ServiceNow and helps organizations extract maximum ROI from the ServiceNow platform. As part of this, we help organizations adopt the reusable automated test framework for all change requests, security updates and even major version upgrades.


For a demo of ServiceNow ATF, please click here.

For more details, please feel free to reach out to marketing@relevancelab.com


2020 Blog, Blog, Featured, ServiceOne

As enterprises adopt popular Agile and DevOps tools and solutions from Atlassian, it is essential to create an end-to-end automation pipeline covering ITSM workflows. Integration of Software Development Lifecycle (SDLC) tools with cloud infrastructure platforms like AWS can provide faster software deliveries with CI/CD, infrastructure automation and continuous production monitoring. RLCatalyst Intelligent Automation solutions complement the platform with an enterprise BOTs automation solution and a mature end-to-end monitoring Command Centre solution. This blog details an integrated solution using the AWS Service Management Connector for Jira Service Desk to cover enterprise workflows for the User Onboarding and Asset Provisioning lifecycle.

The AWS Service Management Connector for Jira Service Desk (Jira SD) allows Jira Service Desk end users to provision, manage and operate AWS resources natively via Atlassian's Jira Service Desk. The Jira Service Desk Cloud module supports the AWS Service Catalog Connector, and the Jira Service Desk Data Center & Server module supports the AWS Service Management Connector.

Jira SD admins can create and provide secured, governed AWS resources to end users via the Service Catalog, execute automation playbooks via AWS Systems Manager, and track the resources in a Config Item view powered by AWS Config.

After downloading the connector from the Atlassian Marketplace at no additional cost, you need to connect it with your AWS account, preferably governed by AWS Control Tower for enhanced security.

The AWS Service Catalog integration allows you to provision, terminate and centrally manage commonly deployed AWS resources such as WorkSpaces, which can be pre-approved or provisioned and terminated based on approval.

Similarly, the AWS Service Management Connector allows Jira SD users to fulfil all the related operational activities. Some of them are listed below.


  • Migrate or Manage CloudWatch Agent.
  • Manage Amazon Inspector Agent.
  • Apply Ansible Playbooks or Chef Recipes on AWS managed instances.
  • Apply Patches from baseline.
  • Change the standby state of an EC2 instance in an auto-scaling group.
  • Attach an additional EBS Volume to the EC2 instance.
  • Attach IAM to an Instance.
  • Install or Uninstall a Distributor package.
  • Configure CloudTrail Logging.
  • Export Metrics and log files from your instances to Amazon CloudWatch.
  • Configure an instance to work with containers and Dockers.
  • Enable or disable live patching on Linux EC2 instances.
  • Configure S3 bucket logging.
  • Enable or disable Windows Updates.
  • Copy Snapshot created.
  • Create DynamoDB backup.
  • Create a new AMI from an EC2 instance.
  • Create an RDS snapshot for an RDS instance.
  • Create an incident in ServiceNow.
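
Many of the actions in this list map to AWS Systems Manager documents that the connector triggers on managed instances. The sketch below shows the equivalent boto3 call for one of them (applying patches from a baseline); the instance ID is a placeholder, and the connector performs this through the Jira SD interface rather than through direct API calls.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Run the patch-baseline document on a managed instance (instance ID is a placeholder).
command = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    Comment="Apply patches from baseline via self-service request",
)
command_id = command["Command"]["CommandId"]

# Wait for the command to finish, then report its status back to the requester.
ssm.get_waiter("command_executed").wait(
    CommandId=command_id, InstanceId="i-0123456789abcdef0"
)
result = ssm.get_command_invocation(
    CommandId=command_id, InstanceId="i-0123456789abcdef0"
)
print(result["Status"])
```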


As shown in the above diagram, Relevance Lab helps enterprises already on AWS and Jira Service Desk to integrate the two using the AWS Service Management Connector. The integration enables a seamless process for creating custom workflows such as auto-approval, cost-based approval and role-based approval; raising an incident in case of any failure when resources are provisioned or terminated; and creating change requests for every update to the workloads.

Benefits of AWS Service Management Connector for Jira Service Desk:

  • Free and Out of Box (OOB) feature without any add-on cost.
  • Support multiple AWS accounts and ensure governance through AWS CT.
  • Provision and Maintenance of AWS resources through one platform (Jira SD).
  • Easy to use by the IT admins without in-depth knowledge of AWS platform.
  • Multiple Portfolios and Service Catalogs for different departments within an Organization.
  • Represent Config Items in a tree structure.
  • Run most of the automated documents in AWS system manager through Jira SD.

The end-to-end orchestration of user onboarding and asset provisioning leverages the out-of-the-box features of AWS and Atlassian tools. However, many real-world scenarios involve complex workflows that need integration with other third-party tools like AD, OKTA, HR systems (Workday/Taleo) and compliance solutions. In situations that require more complex workflows and third-party integrations, the RLCatalyst BOTs solution is integrated with AWS and Atlassian solutions to provide lifecycle automation and post-provisioning observability.


Conclusion:
Relevance Lab is an AWS partner and a DevOps specialist company implementing Atlassian solutions. We help organizations adopt the AWS Service Management Connector with ITSM tools like Jira Service Desk and ServiceNow. The connector provides a common interface and makes it easy for ITSM users to perform all L1 and L2 activities on AWS resources. Our RLCatalyst-based Intelligent Automation and Command Centre complement these solutions to bring in greater efficiencies.


Click here for a demo video.

For more details, please feel free to reach out to marketing@relevancelab.com


2020 Blog, Blog, BOTs Blog, DevOps Blog, Featured

Many organizations include the assessment of fraud risk as a component of the overall SOX risk assessment, and compliance plays a vital role in it.

The word SOX comes from the names of Senator Paul Sarbanes and Representative Michael G. Oxley, who wrote the bill in response to several high-profile corporate scandals in the United States, such as Enron, WorldCom and Tyco. The United States Congress passed the Sarbanes-Oxley Act in 2002 and established rules to protect the public from corporations engaging in fraudulent or invalid practices. The primary objective was to increase transparency in corporate financial reporting and to formalize a system of checks and balances. Implementing SOX security controls helps protect the company from data theft by insider threats and cyberattacks. The SOX act applies to all publicly traded companies in the United States, as well as wholly owned subsidiaries and foreign companies that are publicly traded.

Compliance is essential for an organization to avoid malpractice in its day-to-day business operations, especially during unprecedented times of change such as what we are experiencing today. The way we do business has changed considerably during COVID: workplaces have been replaced by home offices, which makes compliance harder to enforce and increases the risk of fraud.

Given the current COVID-19 situation, with many employees working from home or remote areas, managing resources and time is more challenging. Relevant to the topic of user provisioning, there are risks such as unauthorized access to systems by individual users beyond their roles or responsibilities.

Most organizations follow a defined user provisioning process, such as sending a user access request with relevant details including:


  • User name
  • User Type
  • Application
  • Roles
  • Obtaining line manager approval
  • Application owner approval

And so on, based on the policy requirements, with IT finally granting the access. Several organizations still follow a manual process for this, which creates a security risk.

The traditional way of processing a user provisioning request has become complicated, especially during COVID-19, due to a shortage of resources or lack of resource availability to resolve a task. Reasons include:


  • Different time zone
  • No back-up resources
  • Change in business plan
  • Change in priority request

In such a situation, automation plays an important role. Automation helps reduce manual work, labor cost and reliance on individual resources, and improves time management. An automation process built with proper design, tools and security reduces the risk of material misstatement, unauthorized access and fraudulent activity. Usage of ServiceNow has also helped in tracking and archiving evidence (an evidence repository) essential for compliance. Effective compliance results in better business performance.


Intelligent Automation for SOX compliance can bring in significant benefits like agility, better quality and proactive compliance. The table below provides further details on the IT general controls.

Example – User Access Management


Risk: Unauthorized users are granted access to applicable logical access layers. Key financial data/programs are intentionally or unintentionally modified.
Control: New and modified user access to the software is approved by an authorized approver as per the company IT policy. All access is appropriately provisioned.
Manual: Access to the system is provided manually by the IT team based on the approval given as per the IT policy and the roles and responsibilities requested. The SOD (Segregation of Duties) check is performed manually by Process Owners/Application Owners as per the IT policy.
Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. The BOT checks for SOD role conflicts and provides the information to the Process Owner/Application Owner as per the policy. If the approver rejects the approval request, no access is provided by the BOT to the user in the system, and audit logs are maintained for compliance purposes.

Risk: Unauthorized users are granted privileged rights. Key financial data/programs are intentionally or unintentionally modified.
Control: Privileged access, including administrator accounts and superuser accounts, is appropriately restricted from accessing the software.
Manual: Access to the system is provided manually by the IT team based on the given approval as per the IT policy. A manual validation check and approval are provided by Process Owners/Application Owners on restricted access to the system as per company IT policy.
Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. If the approver rejects the approval request, no access is provided by the BOT to the user in the system, and audit logs are maintained for compliance purposes. The BOT can limit the count and time restrictions on access to the system based on the configuration.

Risk: Unauthorized users are granted access to applicable logical access layers. Key financial data/programs are intentionally or unintentionally modified.
Control: Access requests to the application are properly reviewed and authorized by management.
Manual: User access reports need to be extracted manually for access review using tools or with the help of IT. Review comments need to be provided to IT for de-provisioning of access.
Automation: The BOT can help the reviewer extract the system-generated report on the user. The BOT can compare the active user listing with the HR termination listing to identify terminated users. The BOT can be configured to de-provision the access of users identified in the review report as having unauthorized access.

Risk: Unauthorized users are granted access to applicable logical access layers if access is not removed in a timely manner.
Control: Terminated application users' access rights are removed on a timely basis.
Manual: System access is deactivated manually by the IT team based on the approval provided as per the IT policy.
Automation: System access can be deactivated by an auto-provisioning script designed as per the company IT policy. The BOT can be configured to check the termination date of the user and deactivate system access if SSO is enabled. The BOT can also be configured to deactivate user access to the system based on approval.

The table provides a detailed comparison of the manual and automated approaches. Automation can bring in 40-50% gains in cost, reliability and efficiency. The maturity model requires a three-step process of standardization, tools adoption and process automation.
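
As a simple illustration of one automated check from the table (comparing the active user listing with the HR termination listing to flag accounts for de-provisioning), a minimal sketch is shown below. The file names and field names are placeholders; a real BOT would pull these listings from the IDAM and HR system APIs and log the results as audit evidence.

```python
import csv

def load_users(path: str, key: str) -> set:
    """Read a CSV export and return the set of user IDs in the given column."""
    with open(path, newline="") as f:
        return {row[key].strip().lower() for row in csv.DictReader(f)}

# Placeholder exports: active application users and HR termination listing.
active_users = load_users("active_users.csv", "user_id")
terminated_users = load_users("hr_terminations.csv", "user_id")

# Accounts that are still active although the user has been terminated.
to_deprovision = sorted(active_users & terminated_users)

for user in to_deprovision:
    # A real BOT would call the de-provisioning API here and record an audit log entry.
    print(f"Flag for de-provisioning: {user}")
```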


For more details or enquiries, please write to marketing@relevancelab.com


