2020 Blog, Blog, Feature Blog, Featured

With the growing use of AWS Cloud across different industry segments for frictionless business, the use case of “Enabling Scientific Research” on the Cloud has unique benefits. Research is a very specialized field driven by a community of researchers who want to focus on “Discovering Science, not Servers”. Researchers’ day-to-day work requires processing data, collaborating online, and maintaining labs remotely. There is a need to democratize research computing so that everyone can use it easily.

Working closely with our AWS partners, Relevance Lab is creating an AWS “Research Workbench” powered by Intelligent Automation that enables Research Institutions and Researchers to use the Cloud in a frictionless manner.

Core functionality needed

  • The basic need of high-end, research-focused enterprises to leverage AWS products seamlessly for research-oriented business needs.
  • Specialized roles – Principal Investigators and Researchers under one or many research projects with different funding sources (public and private).
  • Ability to collaborate with Intramural and Extramural researchers.
  • Specialized tools and software needs for an Analytics solution – AWS SageMaker, EMR, AI/ML, HPC, data security, secure Workspaces, large data sets sharing capability etc.
  • Need for proper AWS Management & Governance with the ability to manage Self-Service (ITSM or custom portals) based lifecycle management (Provisioning, Managing, De-provisioning of users and assets).
  • Proper cost and budget management and controls.

Additional challenges for Research Projects

  • Massive Volumes of Data.
  • Cross functional research teams.
  • Research data management with compliance and security considerations.
  • Leveraging new techniques of AI/ML, serverless computing, spot instances for HPC etc.

The scientific community has to adapt to these challenges, and AWS Cloud provides the platform for collaboration, on-demand resources and scale in a secure and compliant manner. Bringing together the relevant AWS tools as a Research Workbench bundle makes this easier.

Catering to research needs requires special attention to the use cases that may come up. For example, a researcher may be working on a data science project using AWS SageMaker notebooks and a large volume of research data in an S3 bucket. Given the sensitive nature of the data, access to the bucket may need to be secured within the organization and allowed only from within a specific network. Also, a researcher may only need to access their own data and computing resources. We have developed a security model that addresses such needs: researchers can only access the resources from a Workspace created for them for that purpose.
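As an illustration of this security model, the sketch below (Python with boto3, using hypothetical bucket and VPC endpoint identifiers) applies a bucket policy that denies requests arriving from outside an approved VPC endpoint, which is one way to keep research data reachable only from the researcher's Workspace network. A production policy would typically also exempt administrative roles.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "research-project-data"             # hypothetical bucket name
VPC_ENDPOINT_ID = "vpce-0123456789abcdef0"   # hypothetical VPC endpoint for the research network

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideResearchVpc",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Deny any request that does not arrive through the approved VPC endpoint.
            # A real policy would usually carve out admin/break-glass roles as well.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```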


To cater to the above, the solution encompasses a “Research Portal” for user interactions and a specialized “Research Workbench” for collaborating on tools and data.

  • Research Portal – Managed with existing ITSM Self Service Portals like ServiceNow.
  • Research Workbench – Created by using AWS standard products, Service Catalog and Control Tower to enforce governance.

The above features allow creating and managing the lifecycle of research within an enterprise by leveraging investments in the existing ITSM Portal and providing a seamless experience for AWS consumption. The solution leverages existing best practices of AWS control services with Control Tower, Service Catalog, secure access and automated provisioning/deprovisioning of resources. A critical part of such a Research Portal is proper cost management and tracking of research budgets and consumption against the same.

The following diagram explains the building blocks of a Research Workbench solution deployed with integration to ITSM Platforms like ServiceNow and using the AWS Service Management connector.


The reference deployment architecture using AWS Control Tower (CT) best practices is explained below. The access is controlled using AWS Simple AD and IAM roles.


The entire cycle of onboarding new researchers and provisioning assets for their research is automated using RLCatalyst BOTs solution with 1-Click deployment while still following the ITSM best practices as explained below.
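Under the hood, a BOT fulfilling such a 1-Click request would typically provision a pre-approved AWS Service Catalog product. The sketch below (boto3, with hypothetical product, artifact and parameter values) shows roughly what that call could look like once the ITSM approval completes.

```python
import boto3

servicecatalog = boto3.client("servicecatalog")

# Hypothetical identifiers for a pre-approved "Research Notebook" product
response = servicecatalog.provision_product(
    ProductId="prod-abcd1234efgh",
    ProvisioningArtifactId="pa-ijkl5678mnop",
    ProvisionedProductName="research-notebook-jdoe",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "ml.t3.medium"},
        {"Key": "ResearcherEmail", "Value": "jdoe@example.edu"},
    ],
    Tags=[
        {"Key": "Project", "Value": "genomics-study"},      # used for cost and budget tracking
        {"Key": "FundingSource", "Value": "public-grant"},
    ],
)

# Track fulfilment status later with describe_record(RecordId=...)
print(response["RecordDetail"]["RecordId"])
```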


Research Workbench Features
Following is a sample list of features planned (this is an indicative list only and not comprehensive)


Summary of Solution benefits
The solution combines the pre-built functionality of the ServiceNow Self Service Portal, standard AWS products and our custom integrations between the two platforms for a specialized research-focused use case. The benefits include:


  • Quick start solution targeting Academic and Research Institutions – New and existing AWS customers.
  • Existing customers with ITSM investments.
    • Using existing ITSM platforms (ServiceNow, Jira Service Desk, Freshservice).
  • Focusing primarily on a “Built on AWS” solution with standard products.
    • AWS Control Tower, Service Catalog, ITSM Connector, SageMaker, Workspaces, EC2, S3, RDS, EMR etc.
  • Deployment options.
    • Per customer Research Solution deployment (using customer Cloud and ITSM resources).
    • Hosted solution offered to customers (with Managed Services based Cloud and ITSM platforms).
  • RLCatalyst-based solution add-ons (Automation, Service Portal, Observability and Cost Governance).
  • Pre-built solution to address 80-90% of standard needs, with scope for some customer-specific customizations.
  • Ability to onboard new customers in 3-4 weeks based on the pre-built offering, with agility and low onboarding costs.

For a demo video please click here

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Feature Blog, Featured, ServiceOne

AWS provides a Service Management Connector for ServiceNow and Jira Service Desk that lets end users provision, manage and operate AWS resources securely via their ITSM Portal. However, a similar solution does not exist for Freshservice. The same maturity of end-to-end automation can be provided to Freshservice customers by using Relevance Lab’s RLCatalyst BOTs solution, which acts as an Automation Service Bus between ITSM tools and AWS Cloud assets.

Freshservice is an intelligent Service Management platform which comprises all the essential modules like Incident Management, Problem Management, Change Management, Release Management, Project Management, Knowledge Management and Asset Management, including Hardware, Software and Contracts. It also provides consolidated reports and analytics.

Many customers are adopting Freshservice as a cloud-based ITSM solution and orchestrating self-service requests for their organizations. One of the common automation needs is user and Workspace onboarding and offboarding, which involves integration with HR systems, AWS Service Catalog and AWS Control Tower for proper management and governance. Similarly, using an Infrastructure as Code model, organizations are using CloudFormation template based models for 1-Click provisioning of complex workloads.

The Freshservice workflow automator, with RLCatalyst BOTs integration, helps automate simple repetitive tasks like assignment of tickets to the right groups and setup of multi-level approvals. Its simple drag-and-drop interface can handle most simple use cases, while the webhook option allows automation of complex workflows by integrating with the right automation tools. In addition, the business rules for forms feature lets you describe conditional logic and actions to create complex dynamic forms.

The below diagram illustrates the integration architecture between Freshservice, AWS and RLCatalyst.


Using the integrated solution, organizations can automate use cases related to both End User Computing (EUC) and standard server-side workload provisioning. Two common examples are:

  • User and Workspace Provisioning: Onboard a new user and request an AWS Workspace, where the original request is generated by Workday/Taleo.
  • Server Infrastructure Provisioning, Application Deployment and Configuration Updates: Request provisioning of a complex multi-node workload using a Service Catalog item fulfilled with an AWS CloudFormation template and post-provisioning setup.

The below diagram illustrates the following EUC automation.


The steps to onboard a new user and Workspace in an automated manner are as follows.

  • RLCatalyst enables Freshservice to create a Service Request (SR) using the file generated from Workday or Taleo.
  • Once an SR is created, the Freshservice workflow automator triggers the approval workflow for either auto approval, cost-based approval or role-based approval.
  • Based on the approval workflow defined, and on its successful execution, the next step is to trigger the onboarding workflow within RLCatalyst.
  • RLCatalyst then enables BOT 1 to create the user in Simple AD.
  • BOT 2 sends out a request to provision the AWS Workspace, while BOT 3 polls for the status of the Workspace creation (see the sketch after this list).
  • Once BOT 3 reports successful provisioning, the workflow instructs AWS SNS to send a notification email to the end user with the Workspace details and login credentials.
  • Finally, RLCatalyst sends a request back to Freshservice for the successful closure of the SR.
  • In case of failure of the Workspace provisioning, RLCatalyst instructs Freshservice to create an Incident for Root Cause Analysis (RCA).
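A minimal Python sketch of what BOT 2 and BOT 3 above might do with boto3 (directory, bundle and topic identifiers are hypothetical): request a Workspace, poll until it is available, then notify the user via SNS.

```python
import time
import boto3

workspaces = boto3.client("workspaces")
sns = boto3.client("sns")

# BOT 2: request a Workspace for the new user (hypothetical directory/bundle IDs)
result = workspaces.create_workspaces(
    Workspaces=[{
        "DirectoryId": "d-1234567890",
        "UserName": "jdoe",
        "BundleId": "wsb-abcdefgh1",
    }]
)
if result["FailedRequests"]:
    raise RuntimeError(f"Workspace request failed: {result['FailedRequests']}")

workspace_id = result["PendingRequests"][0]["WorkspaceId"]

# BOT 3: poll the Workspace state until it becomes AVAILABLE
while True:
    state = workspaces.describe_workspaces(WorkspaceIds=[workspace_id])["Workspaces"][0]["State"]
    if state == "AVAILABLE":
        break
    if state in ("ERROR", "UNHEALTHY"):
        raise RuntimeError(f"Workspace {workspace_id} ended in state {state}")
    time.sleep(60)

# Notify the end user via SNS (hypothetical topic ARN)
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:workspace-onboarding",
    Subject="Your AWS Workspace is ready",
    Message=f"Workspace {workspace_id} has been provisioned for user jdoe.",
)
```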

Similarly, a user can request a multi-node application stack deployment in AWS using the Freshservice service catalog. The below diagram illustrates the steps:


  • Create the infrastructure with multiple AWS resources (EC2, S3, RDS etc).
  • Deploy one or more applications on the instances created (Web Tier, App Tier, DB Tier).
  • Configure the application with run-time information, e.g. DNS endpoint creation, binding the listening IP address of an application to the IP address of the instance created, and updating YAML files with environment variable values.
  • Deploy the monitoring agents like Infra health, App health, Log monitoring and Service Registry.
  • Set up network configurations like hosted zones and routes, and security configurations like SSL certificates.

The multi-stage orchestration requires a workflow for state and context management during the lifecycle and this is provided by using RLCatalyst Workflow capabilities.
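The server-side example follows a similar pattern. Below is a minimal boto3 sketch (template URL and parameters are hypothetical) of the CloudFormation call that could fulfil such a Service Catalog request, waiting for stack completion before the post-provisioning steps above run.

```python
import boto3

cloudformation = boto3.client("cloudformation")

stack_name = "three-tier-app-dev"
cloudformation.create_stack(
    StackName=stack_name,
    # Hypothetical template describing web, app and DB tiers
    TemplateURL="https://s3.amazonaws.com/example-templates/three-tier-app.yaml",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "dev"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.medium"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack is fully created, then hand off to the
# post-provisioning workflow (app deployment, DNS, monitoring agents)
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName=stack_name)
```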

Relevance Lab is a solution partner of Freshservice. We assist enterprises in adopting AWS Cloud with intelligent automation using RLCatalyst BOTs. Relevance Lab also offers a pre-integrated ServiceOne solution with Freshservice.

For a demo video and more details, please click here.

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, Blog, Cloud Blog, Featured, RLCatalyst Blog

The adoption of Cloud and DevOps has changed how large enterprises traditionally manage the Infra, Middleware and Applications lifecycle. There is a continuous “tension” between Operations and Development teams to achieve the right balance between “security + compliance” and “agility + flexibility”. For large enterprises with multiple business units, global operations and distributed assets across multiple cloud providers, these issues are even more complex. While there is no “silver bullet” that can solve all these issues, every enterprise needs a broad framework for achieving the right balance.

The broad framework is based on the following criteria:

  • IT teams predominantly define the infrastructure components like images, network designs, security policies, compliance guardrails, standard catalogs etc. based on the organization’s policies and requirements.
  • Application teams have the flexibility to order and consume these components and to manage post provisioning lifecycle specific to their needs.

The challenge faced by larger enterprises running workloads on multiple clouds is the lack of a common orchestration portal that lets application teams raise self-service requests and use flexible workflows for managing workload configuration and the application deployment lifecycle. The standard cloud management portals from the major cloud providers have automated most of their internal provisioning processes, yet don’t provide customers system-specific solutions or workload placement across various public and private clouds. In order to serve the needs of application groups, a portal is needed with the following key functionalities.


  • The self-service portal is controlled via role-based access.
  • Standard catalog of items for Infrastructure Management.
  • Flexible workflow for creating a full lifecycle of configurations management.
  • Microservices-based building blocks for consuming “INFRASTRUCTURE AS CODE” and managing the post-provisioning lifecycle.
  • Ability to monitor the end to end provisioning lifecycle with proper error handling and interventions when needed.
  • Governance and management post provisioning across multiple workloads and cloud services.

Relevance Lab has come up with a microservices-based automation solution which automates enterprise multi-cloud provisioning, pre- and post-provisioning workflows, workload management, mandatory policies, configurations, and security controls. The end-to-end provisioning is automated and made seamless to the user by integrating with ServiceNow, domain servers, configuration servers and various cloud services. Multiple microservices handle each stage of the automation, making it highly flexible to extend to any cloud resource.
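The sketch below is a conceptual Python illustration of this stage-wise orchestration. Each function stands in for a separate microservice; the names, payload fields and in-memory “CMDB” are illustrative only and not the actual RLCatalyst interfaces.

```python
MANDATORY_FIELDS = ("owner", "cost_centre", "cloud", "resource_type")
CMDB = []  # stand-in for the ITSM CMDB record store


def validate(request):
    # Enforce mandatory policies (owner, cost centre, tags) before anything is provisioned
    missing = [f for f in MANDATORY_FIELDS if f not in request]
    if missing:
        raise ValueError(f"Request rejected, missing mandatory fields: {missing}")
    return request


def provision(request):
    # A real implementation would dispatch to a cloud-specific microservice
    # (boto3 for AWS, Azure SDK, GCP SDK, vSphere APIs for VMware).
    return {**request, "resource_id": f"{request['cloud']}-demo-0001", "state": "provisioned"}


def configure(resource):
    # Post-provisioning stage: security policies, monitoring agents, config management
    return {**resource, "agents": ["monitoring", "log-forwarder"], "state": "configured"}


def register(resource):
    CMDB.append(resource)  # make the asset visible for governance and cost tracking
    return resource


def orchestrate(request):
    for stage in (validate, provision, configure, register):
        request = stage(request)
    return request


print(orchestrate({"owner": "app-team-1", "cost_centre": "CC-42",
                   "cloud": "aws", "resource_type": "vm"}))
```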

The building blocks of the framework are as shown here:


The Infrastructure as Code templates, maintained in a source code repository, can cover a variety of resources.


Resource | Platform | Automated Process
Compute – VM/Server | VMware, AWS, Azure, GCP | Automated provisioning of VMs and backup VMs
Compute – DB Server | VMware, AWS, Azure, GCP | Automated provisioning of DB servers and backup servers – Oracle, PostgreSQL, MSSQL, MySQL, SAP
Compute – HA and DR | VMware, AWS, Azure, GCP | Automated provisioning of HA and DR servers
Compute – Application Stack | AWS, Azure | Automated provisioning of application stacks using CFTs and ARM templates
Network – VPC | AWS, Azure, GCP | Automated provisioning of VPCs and subnets
Storage | AWS, Azure, GCP | Automated provisioning of S3 buckets or Blob storage
Storage – Gateways | AWS | Automated provisioning of storage gateways
DNS Server | AWS, Azure | Automated provisioning of DNS servers


Getting Started with Hybrid Cloud Automation – Our recommendations:

  • Generate a standard cloud catalogue and create reusable automated workflows for processes such as approval and access control.
  • To optimize the management of resources, limit the number of blueprints. Specific features can be provisioned in a modular fashion on top of the base image.
  • Use a configuration management tool like Chef/Puppet/Ansible to install the various management agents.
  • Use the “INFRASTRUCTURE AS CODE” principle to provision infrastructure in an agile fashion, using tools like GitHub, Jenkins and a configuration management tool.

Benefits:

  • Significantly reduce operations cost through reduced manual effort and proactive monitoring services on a single platform.
  • Reduce time to market for new cloud services by enabling single-click deployment of cloud services.

For more details, please feel free to reach out to marketing@relevancelab.com



2020 Blog, Blog, Featured

ServiceNow is the dominant platform used by organizations for IT Service Management. Organizations use ServiceNow to build digital workflows and drive frictionless business. By leveraging DevOps and Automation, organizations can speed up software release and upgrade cycles.

With two major releases per year and quarterly security patch updates, ServiceNow has ensured that new features keep pace with current industry trends and comply with security mandates. However, to get the benefit of all these new features and security updates, organizations need to upgrade to the latest version on a timely basis. The onus is on the individual organization to ensure all its customizations are tested thoroughly after every upgrade or security update. Some of these upgrades can run into a few hundred test cases. Testing each of these features after every upgrade would typically take a few weeks to a few months, depending on the number of test cases. Many organizations also build custom applications on top of the ServiceNow platform, which adds to the testing burden during upgrades.

ServiceNow offers an Automated Test Framework (ATF) from the Istanbul release onwards, which can automate testing and reduce the time taken from a few weeks to a few days. ATF is intended for regression testing and ensures that your existing functionality remains intact. It enables no-code and low-code users to create automated test scenarios with ease. ATF removes upgrade bottlenecks by reducing manual testing significantly, with minimal business impact, and improves development efficiency.

Benefits of ServiceNow ATF:

  • Free and Out of Box (OOB) feature without any add-on cost.
  • Fast track upgrade and development time by shifting manual testing to automated testing.
  • Validate all your customizations with every change/update/upgrade.
  • Reduction of manual errors due to consistency in the way the test cases are run.
  • Reusable and simple to use.
  • Testing can be executed along with development resulting in better quality output.

As shown in the above example, a test case with about 10 scenarios, which would typically take 10 hours manually, takes only about an hour with ServiceNow ATF. This is achieved by creating and running batches of tests as automated test suites. Tests can be grouped into test suites, which enables running a group of test cases as a single job.

What is the Automated Test Framework?
ATF is a tool to streamline the upgrade and QA processes by building automated tests that check whether software or configuration changes have potentially ‘broken’ any existing functionality. It also means developers are no longer required to take on operational activities like code refactoring to generate new test cases.
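For teams that want to trigger these regression runs from a pipeline, ServiceNow also exposes ATF through its CI/CD REST API. The sketch below is a hedged Python example: it assumes your instance has the CI/CD (sn_cicd) API available, and the instance URL, credentials and test suite sys_id are placeholders.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"   # placeholder instance URL
AUTH = ("atf.runner", "password")                     # placeholder credentials with the required roles

# Kick off an ATF test suite by sys_id (placeholder value)
resp = requests.post(
    f"{INSTANCE}/api/sn_cicd/testsuite/run",
    params={"test_suite_sys_id": "d1b4e1f0db1234509e8cf9d4e29619ab"},
    auth=AUTH,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# The response carries an execution/progress reference that can be polled
# until the suite finishes; print it for the pipeline log.
print(resp.json())
```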


Customer Solution :
Relevance Lab has helped a large US-based digital learning company benefit from Intelligent Automation of their ServiceNow instance with ATF. The customer uses ServiceNow extensively for ITSM, IT Asset Management, GRC, IT Operations Management and the Vulnerability Remediation lifecycle. Relevance Lab has implemented extensive automation of ServiceNow tickets (Incident, Problem, Change, Service Requests, Vulnerability Incident tickets, CMDB etc.) using its RLCatalyst product. The automation includes a number of customised forms, workflows and data schemas which need to be validated every time the ServiceNow instance is upgraded. The normal upgrade cycle would take about a week, but complete testing post upgrade took up to 3 weeks. To cut down the cycle time and increase quality, the entire upgrade cycle and associated functionalities were automated for testing using ServiceNow ATF. This reduced the testing effort for 400 test cases (104 flows) from 3 weeks to 0.5 days, an over 90% reduction, with more accurate quality output.

The test cases spanned the below top categories:

  • SAML SSO.
  • Okta Provision.
  • User Access Requests.
  • Bot Automation.
  • Asset Catalog.
  • Change Management.
  • Surveys.
  • Contract Management.
  • Asset Management.
  • Knowledge Management.
  • Reports & Dashboards.
  • GRC & GDPR.

Relevance Lab is a ServiceNow partner and helps organizations extract maximum ROI from the ServiceNow platform. As part of this, we help organizations adopt a reusable automated test framework for all change requests, security updates and major version upgrades.


For a demo of ServiceNow ATF, please click here.

For more details, please feel free to reach out to marketing@relevancelab.com


2020 Blog, Blog, Featured, ServiceOne

As enterprises adopt popular Agile and DevOps tools and solutions from Atlassian, it is essential to create an end-to-end automation pipeline covering ITSM workflows. Integrating Software Development Lifecycle (SDLC) tools with cloud infrastructure platforms like AWS can provide faster software deliveries with CI/CD, infrastructure automation and continuous production monitoring. RLCatalyst Intelligent Automation solutions complement the platform with an enterprise BOTs automation solution and a mature end-to-end monitoring Command Centre solution. This blog details an integrated solution using the AWS Service Management Connector for Jira Service Desk to cover the enterprise workflow of the User Onboarding + Asset Provisioning lifecycle.

The AWS Service Management Connector for Jira Service Desk (Jira SD) allows Jira Service Desk end-users to provision, manage, and operate AWS resources natively via Atlassian’s Jira Service Desk. Jira Service Desk Cloud module supports AWS Service Catalog Connector, and the Jira Service Desk Data Centre & Server module supports AWS Service Management Connector.

Jira SD admins can create and provide secured, governed AWS resources to end users via Service Catalog, execute automation playbooks via AWS Systems Manager and track the resources in a Config Item view powered by AWS Config.

After downloading the connector from the Atlassian Marketplace at no additional cost, you connect it with your AWS account, preferably one governed by AWS Control Tower for enhanced security.

AWS Service Catalog allows you to centrally manage commonly deployed AWS resources like Workspaces, which can be pre-approved and then provisioned or terminated based on approval.

Similarly, the AWS Service Management Connector allows Jira SD users to fulfil all the related operational activities. Some of them are listed below.


  • Migrate or Manage CloudWatch Agent.
  • Manage Amazon Inspector Agent.
  • Apply Ansible Playbooks or Chef Recipes on AWS managed instances.
  • Apply Patches from baseline.
  • Change the standby state of an EC2 instance in an auto-scaling group.
  • Attach an additional EBS Volume to the EC2 instance.
  • Attach IAM to an Instance.
  • Install or Uninstall a Distributor package.
  • Configure CloudTrail Logging.
  • Export Metrics and log files from your instances to Amazon CloudWatch.
  • Configure an instance to work with containers and Docker.
  • Enable or disable live patching on Linux EC2 instances.
  • Configure S3 bucket logging.
  • Enable or disable Windows Updates.
  • Copy Snapshot created.
  • Create DynamoDB backup.
  • Create a new AMI from an EC2 instance.
  • Create an RDS snapshot for an RDS instance.
  • Create an incident in ServiceNow.
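Many of these actions map to AWS Systems Manager automation documents. As one illustration, the sketch below uses boto3 to start the AWS-owned AWS-CreateImage runbook (the instance ID is a placeholder), which is the kind of call that backs “Create a new AMI from an EC2 instance”.

```python
import boto3

ssm = boto3.client("ssm")

# Start the AWS-owned automation runbook that creates an AMI from an instance
# (placeholder instance ID).
execution = ssm.start_automation_execution(
    DocumentName="AWS-CreateImage",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)

execution_id = execution["AutomationExecutionId"]

# Check the execution status (e.g. InProgress, Success, Failed)
status = ssm.get_automation_execution(AutomationExecutionId=execution_id)
print(status["AutomationExecution"]["AutomationExecutionStatus"])
```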


As shown in the above diagram, Relevance Lab helps enterprises already on AWS and Jira Service Desk to integrate the two using the AWS Service Management Connector. The integration enables custom workflows such as auto-approval, cost-based approval and role-based approval. It can likewise raise an incident in case of any failure of the resources being provisioned or terminated, and create change requests for every update of the workloads.

Benefits of AWS Service Management Connector for Jira Service Desk:

  • Free and Out of Box (OOB) feature without any add-on cost.
  • Support multiple AWS accounts and ensure governance through AWS CT.
  • Provisioning and maintenance of AWS resources through one platform (Jira SD).
  • Easy to use by IT admins without in-depth knowledge of the AWS platform.
  • Multiple Portfolios and Service Catalogs for different departments within an Organization.
  • Represent Config Items in a tree structure.
  • Run most of the automation documents in AWS Systems Manager through Jira SD.

The end-to-end orchestration of user onboarding and asset provisioning leverages the out-of-the-box features of AWS and Atlassian tools. However, many real-world scenarios need complex workflows with integration to other third-party tools like AD, OKTA, HR systems (Workday/Taleo) and compliance solutions. In such situations, the RLCatalyst BOTs solution is integrated with the AWS and Atlassian solutions to provide lifecycle automation and observability post provisioning.


Conclusion:
Relevance Lab is an AWS partner and a DevOps specialist company implementing Atlassian solutions. We help organizations adopt the AWS Service Management Connector with ITSM tools like Jira Service Desk and ServiceNow. The connector provides a common interface and makes all L1 and L2 activities easier for ITSM users managing AWS resources. Our RLCatalyst-based Intelligent Automation and Command Centre complement these solutions to bring in greater efficiencies.


Click here for a demo video.

For more details, please feel free to reach out to marketing@relevancelab.com


2020 Blog, Blog, BOTs Blog, DevOps Blog, Featured

Many organizations include the assessment of fraud risk as a component of the overall SOX risk assessment, and Compliance plays a vital role in it.

The word SOX comes from the names of Senator Paul Sarbanes and Representative Michael G. Oxley, who wrote the bill in response to several high-profile corporate scandals like Enron, WorldCom and Tyco in the United States. The United States Congress passed the Sarbanes-Oxley Act in 2002 and established rules to protect the public from corporations engaging in fraudulent or invalid practices. The primary objective was to increase transparency in corporate financial reporting and to initialize a formalized system of checks and balances. Implementing SOX security controls helps protect the company from data theft by insider threats and cyberattacks. The SOX act applies to all publicly traded companies in the United States, as well as wholly owned subsidiaries and foreign companies that are publicly traded there.

Compliance is essential for an organization to avoid malpractice in its day-to-day business operations, especially during unprecedented times of change such as what we are experiencing today. The way we do business has changed considerably during COVID: workplaces have been replaced by home offices, making Compliance harder to enforce and increasing the risk of fraud.

Given the current COVID-19 situation, with many employees working from home or remote areas, managing resources and time has become harder. Relevant to the topic of user provisioning, there are risks such as unauthorized access to systems by individual users beyond their roles or responsibilities.

Most organizations follow a defined user provisioning process, such as sending a user access request with relevant details including:


  • User name
  • User Type
  • Application
  • Roles
  • Obtaining line manager approval
  • Application owner approval

and so on, based on the policy requirements, with IT finally granting the access. Several organizations still follow a manual process, thereby creating a security risk.

The traditional way of processing a user provisioning request has become complicated, especially during COVID-19, due to a shortage of resources or lack of resource availability to resolve a task. Typical reasons are:


  • Different time zone
  • No back-up resources
  • Change in business plan
  • Change in priority request

In such a situation, automation plays an important role. Automation has helped reduce manual work, labor cost, dependency on individual resources and time spent. An automation process built with proper design, tools and security reduces the risk of material misstatement, unauthorized access and fraudulent activity, and improves time management. Usage of ServiceNow has also helped in tracking and archiving evidence (an evidence repository) essential for Compliance. Effective Compliance results in better business performance.


Intelligent Automation for SOX Compliance can bring in significant benefits like agility, better quality and proactive Compliance. The below table provides further details on the IT general controls.

Example – User Access Management


Risk: Unauthorized users are granted access to applicable logical access layers; key financial data/programs are intentionally or unintentionally modified.
Control: New and modified user access to the software is approved by an authorized approver as per the company IT policy. All access is appropriately provisioned.
Manual: Access to the system is provided manually by the IT team based on the approval given as per the IT policy and the roles and responsibilities requested. The SOD (Segregation of Duties) check is performed manually by the Process Owner/Application Owner as per the IT policy.
Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. The BOT checks for SOD role conflicts and provides the information to the Process Owner/Application Owner as per the policy. If the approver rejects the request, the BOT provides no access to the user, and audit logs are maintained for Compliance purposes.

Risk: Unauthorized users are granted privileged rights; key financial data/programs are intentionally or unintentionally modified.
Control: Privileged access, including administrator and superuser accounts, is appropriately restricted from accessing the software.
Manual: Access to the system is provided manually by the IT team based on the given approval as per the IT policy. A manual validation check and approval of restricted access to the system is provided by the Process Owner/Application Owner as per the company IT policy.
Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. If the approver rejects the request, the BOT provides no access to the user, and audit logs are maintained for Compliance purposes. The BOT can also limit the number and duration of system accesses based on configuration.

Risk: Unauthorized users are granted access to applicable logical access layers; key financial data/programs are intentionally or unintentionally modified.
Control: Access requests to the application are properly reviewed and authorized by management.
Manual: User access reports need to be extracted manually for access review, using tools or with the help of IT. Review comments need to be provided to IT for de-provisioning of access.
Automation: The BOT can help the reviewer extract system-generated user reports, compare the active user listing with the HR termination listing to identify terminated users, and be configured to de-provision access for users identified in the review report as having unauthorized access.

Risk: Unauthorized users retain access to applicable logical access layers if not removed in a timely manner.
Control: Terminated application users' access rights are removed on a timely basis.
Manual: System access is deactivated manually by the IT team based on the approval provided as per the IT policy.
Automation: System access can be deactivated by an auto-provisioning script designed as per the company IT policy. The BOT can be configured to check the termination date of the user and deactivate system access if SSO is enabled, or to deactivate user access based on approval.

The table provides a detailed comparison of the manual and automated approaches. Automation can bring in 40-50% gains in cost, reliability and efficiency. The maturity model requires a three-step process of standardization, tools adoption and process automation.
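To make the "compare the active user listing with the HR termination listing" control above concrete, here is a minimal Python sketch of the comparison a BOT might run before de-provisioning. The CSV file names and the user_id column are illustrative assumptions.

```python
import csv


def find_terminated_but_active(active_users_csv, hr_terminations_csv):
    """Return user IDs that still have system access but appear in the HR termination list."""
    with open(hr_terminations_csv, newline="") as f:
        terminated = {row["user_id"] for row in csv.DictReader(f)}
    with open(active_users_csv, newline="") as f:
        active = {row["user_id"] for row in csv.DictReader(f)}
    return sorted(active & terminated)


# Example usage: these accounts should be flagged for de-provisioning and
# the evidence archived for the SOX audit trail.
violations = find_terminated_but_active("active_users.csv", "hr_terminations.csv")
for user_id in violations:
    print(f"De-provision and log evidence for: {user_id}")
```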


For more details or enquires, please write to marketing@relevancelab.com



2020 Blog, AIOps Blog, Blog, Feature Blog, Featured

With growing Cloud adoption and the need for intelligent infrastructure automation, larger enterprises are adopting hybrid-cloud solutions like Terraform. Creating reusable templates for provisioning complex infrastructure setups in the Cloud and in data centers, orchestrating Cloud management with self-service portals, and full lifecycle monitoring of the assets can provide flexible, reusable and scalable enterprise cloud solutions.
Relevance Lab is a HashiCorp partner with multiple successful enterprise infrastructure automation implementations using Terraform, covering AWS, Azure, VMware and GCP with 5000+ node setups.


Solution Highlights:

  • Our solution allows you to completely rebuild stacks using automation. It is instrumental in provisioning newer environments with minimal code changes.
  • It has the built-in ability to replicate stacks across multiple regions with minimal code changes.
  • Capability to add/remove instances for components with few code changes.
  • Simple code structure; any new infrastructure need can be provisioned by modifying the variables.
  • Ability to modify instance attributes such as volumes, instance sizing, AMIs and security groups with minimal code changes.

Tools & Technologies:
Terraform from HashiCorp has emerged as the leading infrastructure automation tool. Terraform helps in building, changing and versioning infrastructure efficiently. Terraform is declarative: with the help of configuration files, we can describe the components to be built and managed across the entire datacenter.


Design Considerations:
Below mentioned are some of our design considerations based on standard practices in infrastructure automation. These structures have helped us gain flexibility and ease in scaling stacks based on demand.

  1. Code Repo Structure: Each AWS stack is a separate GitHub repo, while Terraform modules live in a shared repo.

    • This makes the code design structure very scalable for creating newer AWS stacks in a different region, re-building stacks in case of a disaster, or scaling resources based on traffic/load.
    • A separate repo helps maintain isolation, as each stack has a varied footprint of resources.
    • It helps with security and compliance, as audits can be performed against a specific stack.

  2. Segmentation: The design model below shows the automation build-out for each AWS account. Each layer is well segmented and can be easily scaled based on need, and making a specific change to any layer is easier.

  3. Integration: Fully integrated with GitHub for Continuous Integration and Continuous Deployment.

    • Each change is performed on a branch which is merged via a Pull Request.
    • Each Pull Request is reviewed, verified and merged into the master branch.
    • Infrastructure changes are thoroughly tested during the terraform PLAN stage and then applied with terraform APPLY (see the sketch after this list).

  4. Code reusability: Modules provide an easy way to abstract common blocks of configuration into reusable infrastructure elements.

    • Modules significantly reduce duplication, enable isolation, and enhance testability.
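A minimal sketch of the plan-then-apply flow from the Integration step, wrapped in Python as it might run inside a CI job (the stack directory path is illustrative):

```python
import subprocess


def run(cmd, cwd):
    # Echo the command for the CI log, then fail the job on a non-zero exit code
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


STACK_DIR = "stacks/us-east-1/web"   # hypothetical per-stack repo layout

# Initialise providers/modules, produce a reviewable plan, then apply exactly that plan
run(["terraform", "init", "-input=false"], STACK_DIR)
run(["terraform", "plan", "-input=false", "-out=tfplan"], STACK_DIR)
# In a real pipeline the plan output is reviewed (e.g. on the Pull Request) before this step
run(["terraform", "apply", "-input=false", "tfplan"], STACK_DIR)
```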

Benefits:

  • Provides the ability to spin up an entire environment in minutes.
  • Reduces the time to roll out complex network and storage changes to a few minutes.
  • Infrastructure is managed as code and all changes are well tested, resulting in fewer outages caused by infra configuration changes.
  • It is easy to operate and maintain because Terraform uses a declarative language.
  • Infrastructure is idempotent and managed against a desired state.

Conclusion:
Using Terraform design best practices, enterprises can quickly build and manage infrastructure which is highly scalable and efficient. Further, this automation can be hooked into a Jenkins pipeline project for automated code pushes for infra changes, tied to a standard release and deployment process. The solution can be further strengthened by:


  • Leveraging Chef for configuration management and managing all the application software installation and configurations via Chef cookbooks and recipes.
  • Leveraging InSpec for auditing the properties of the AWS resources.

There are a few other additions which could be introduced to this design to tie security and compliance policies tightly to infrastructure as code. This may be achieved by integrating with Sentinel, which prevents infra provisioning if the infra code deviates from the security policies. Sentinel helps us build a fine-grained, condition-based policy framework.


For more details or enquires, please write to marketing@relevancelab.com



2020 Blog, Blog, Featured, RLCatalyst Blog

RLCatalyst OKTA Integration  

Modern Identity and Access Management (IDAM) is foundational to building digital customer experiences. In the area of Intelligent Automation, a proper authentication and authorization system and an audit trail for BOT-led executions are critical needs. As BOTs handle more workloads and user interactions, there is a crucial need to integrate RLCatalyst BOTs with enterprise IDAM platforms like OKTA using SAML 2.0. OKTA provides a modern platform for IDAM, and by using a SAML 2.0 adapter, RLCatalyst now supports more secure and flexible access control for both UI and API based access to its automation functionality. Our solution provides frictionless integration between ServiceNow, OKTA, Windows AD and RLCatalyst BOTs Servers hosted across hybrid Cloud platforms.



SAML 2.0 is a widely accepted industry standard for user authentication. It separates authentication and authorization from the application and the system of record for users, which in most organizations is Active Directory or an LDAP-based system. The SAML 2.0 standard defines two entities. The first is the Identity Provider (IdP), to which applications can delegate user authentication. The other is the application itself, the Service Provider (SP). With applications integrated to the IdP using SAML 2.0, users in an organization only need one set of credentials to log in to any application, and administrators can centrally administer access to all applications.


RLCatalyst BOTs Server is Intelligent Automation software in use with enterprises and supports single sign-on using the SAML 2.0 protocol. When a user tries to access the application, they are redirected to the Identity Provider’s login screen. The IdP accepts the credentials, authenticates the user and then redirects the user back to RLCatalyst (here the Service Provider, or SP) with an auth token. The SP then provides access to the requested resource. In subsequent requests, the same auth token is passed by the user agent, and the SP validates the token against the IdP before providing access to the resource.
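For illustration only, here is the kind of Service Provider configuration such an SP-initiated flow relies on, expressed as a Python dictionary in the shape used by common SAML toolkits (e.g. OneLogin's python3-saml). All URLs, entity IDs and the certificate are hypothetical placeholders, not RLCatalyst's actual configuration.

```python
# Illustrative SP/IdP settings for a SAML 2.0 single sign-on integration.
SAML_SETTINGS = {
    "strict": True,
    "sp": {
        "entityId": "https://rlcatalyst.example.com/metadata",
        "assertionConsumerService": {
            # Where the IdP posts the signed SAML assertion after login
            "url": "https://rlcatalyst.example.com/saml/acs",
            "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
        },
    },
    "idp": {
        "entityId": "http://www.okta.com/exk1hypothetical",
        "singleSignOnService": {
            # Users are redirected here to authenticate against Okta
            "url": "https://example.okta.com/app/exk1hypothetical/sso/saml",
            "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect",
        },
        # Public certificate used to verify the IdP's signature on assertions
        "x509cert": "MIIC...placeholder...",
    },
}
```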



Supporting the SAML 2.0 standard allows RLCatalyst to seamlessly work with multiple Identity Providers like Okta, Auth0, Ping Identity etc. It enables enterprises to integrate our automation platform seamlessly into their SSO roll-out plans, thus reinforcing their security and compliance.


For more information feel free to contact marketing@relevancelab.com



2020 Blog, Blog, Cloud Blog, command blog, Featured

For large enterprises and SMBs with multiple AWS accounts, monitoring and managing those accounts is a huge challenge, as they are spread across multiple teams and can run into a few hundred in some organizations.


AWS Control Tower helps organizations set up, manage, monitor and govern a secured multi-account environment using AWS best practices.



Benefits of AWS Control Tower:


  • Automate the setup of multiple AWS environments in a few clicks with AWS best practices
  • Enforce governance and compliance using guardrails
  • Centralized logging and policy management
  • Simplified workflows for standardized account provisioning
  • Perform Security Audits using Identity & Access Management

Features of AWS Control Tower:


a) AWS Control Tower automates the setup of a new landing zone, which includes:


  • Creating a multi-account environment using AWS Organizations
  • Identity management using AWS Single Sign-On (SSO) default directory
  • Federated access to accounts using AWS SSO
  • Centralized logging from AWS CloudTrail, and AWS Config stored in Amazon S3
  • Enable cross-account security audits using AWS IAM and AWS SSO

b) Account Factory


  • This helps to automate the provisioning of new accounts in the organization.
  • A configurable account template that helps to standardize the provisioning of new accounts with pre-approved account configurations.

c) Guardrails


  • Pre-bundled governance rules for security, operations, and compliance which can be applied to Organization Units or a specific group of accounts.
  • Preventive Guardrails – Prevent policy violations through enforcement. Implemented using AWS CloudFormation and Service Control Policies
  • Detective Guardrails – Detect policy violations and alert in the dashboard using AWS Config rules

d) 3 types of Guidance (Applied on Guardrails)


  • Mandatory Guardrails – Always Enforced. Enabled by default on landing zone creation.
  • Strongly recommended Guardrails – Enforce best practices for well-architected, multi-account environments. Not enabled by default on landing zone creation.
  • Elective guardrails – To track actions that are restricted. Not enabled by default on landing zone creation.

e) Dashboard


  • Gives complete visibility of the AWS Environment
  • Can view the number of OUs (Organization Units) and accounts provisioned
  • Guardrails enabled
  • Check the list of non-compliant resources based on guardrails enabled.

Steps to setup AWS CT:


Setting up Control Tower on a new account is relatively simple compared to setting it up on an existing account. Once Control Tower is set up, the landing zone has the following:


  • 2 Organizational Units
  • 3 accounts, a master account and isolated accounts for log archive and security audit
  • 20 preventive guardrails to enforce policies
  • 2 detective guardrails to detect config violations


The next step is to create a new Organizational Unit and then create a new account using the Account Factory and map it to the OU that was created. Once this is done, you can start setting up your resources, and any non-compliance starts reflecting in the noncompliant resources dashboard. In addition, any deviation from standard AWS best practices is reflected in the dashboard.
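Account vending can also be driven programmatically. Below is a hedged boto3 sketch of provisioning a new account through the Account Factory product in AWS Service Catalog. The product and artifact IDs are hypothetical, and the parameter names assume the standard Account Factory template; check the product in your own landing zone before use.

```python
import boto3

servicecatalog = boto3.client("servicecatalog")

# Hypothetical IDs for the "AWS Control Tower Account Factory" product in Service Catalog
response = servicecatalog.provision_product(
    ProductId="prod-accountfactory1",
    ProvisioningArtifactId="pa-accountfactory1",
    ProvisionedProductName="research-team-account",
    ProvisioningParameters=[
        {"Key": "AccountName", "Value": "research-team"},
        {"Key": "AccountEmail", "Value": "aws-research-team@example.com"},
        {"Key": "ManagedOrganizationalUnit", "Value": "Research"},
        {"Key": "SSOUserEmail", "Value": "owner@example.com"},
        {"Key": "SSOUserFirstName", "Value": "Jane"},
        {"Key": "SSOUserLastName", "Value": "Doe"},
    ],
)

# e.g. CREATED / IN_PROGRESS; poll with describe_record for completion
print(response["RecordDetail"]["Status"])
```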


Conclusion:


With many organizations opting for AWS cloud services, AWS Control Tower, with its centralized management, offers the simplest way to set up and govern multiple AWS accounts securely through its features and established best practices. Provisioning new AWS accounts is as simple as clicking a few buttons while conforming to the organization’s requirements and policies. Relevance Lab can help your organization build AWS Control Tower and migrate your existing accounts to Control Tower. For a demo of Control Tower usage in your organization, click here. For more details please reach out to marketing@relevancelab.com



2020 Blog, Blog, DevOps Blog, Featured, ServiceOne

Using GIT configuration management integration in Application Development to achieve higher velocity and quality when releasing value-added features and products


ServiceNow offers a fantastic platform for developing applications. All infrastructure, security, application management, scaling etc. is taken care of by ServiceNow, and application developers can concentrate on their core competencies within their application domain. However, companies trying to develop applications on ServiceNow and distribute them to multiple customers face several challenges. In this article, we take a look at some of those challenges and their solutions.



A typical ServiceNow customization or application is distributed with several of the following elements:


  • Update Sets
  • Template changes
  • Data Migration
  • Role creation
  • Script changes

Distribution of an application is typically done via an Update Set, which captures all the delta changes on top of a well-known baseline. This baseline could be the base version of a specific ServiceNow release (like Orlando or Madrid) plus a specific patch level for that release. To understand the intricacies of distributing an application, we first have to understand the concept of a global application versus a scoped application.


Typically, only applications developed by ServiceNow are in the global scope. However, before the Application Scoping feature was released, custom applications also resided in the global scope. This means that other applications can read the application data, make API requests, and change the configuration records.


Scoped applications, which are now the default, are uniquely identified along with their associated artifacts with a namespace identifier. No other application can access the data, configuration records, or the API unless specifically allowed by the application administrator.


Distributing an application via update sets is easy if the application has a private scope, since there are no challenges with global data dependencies.


The second challenge is with customizations done after distributing an application. There are two possible scenarios.


  • An application release has been distributed (let’s call it 1.0).
  • Customer-1 needs customization in the application (say a blue button is to be added in Form-1). Now customer 1 has 1.0 + Blue Button change.
  • Customer-2 needs different customization (say a red button is to be added in Form-1)
  • The application developer has also done some other changes in the application and plans to release the 2.0 version of the application.

Problem-1: If application 2.0 is released and Customer-1 upgrades to that release, they lose the blue-button changes. They have to redo the blue-button change and retest.



Problem-2: If the developer accepts blue button changes into the application and releases 2.0 with blue button changes, when Customer-2 upgrades to 2.0, they have a conflict of their red button change with the blue-button change.



These two problems can be solved by using version control with Git. When the application developers want to accept the blue button changes into the 2.0 release, they can use the Git merge feature to merge the commit of the blue button changes from the customer-1 repo into their own repo.


When customer-2 needs to upgrade to the 2.0 version, they use the Stash feature of Git to set aside their red button changes prior to the upgrade. After the upgrade, they can apply the stashed changes to bring the red button change back into their instance.
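In the ServiceNow Studio source control UI these operations are surfaced as stash and apply-stash style actions; for readers more familiar with plain Git, the equivalent of customer-2's upgrade sequence, scripted in Python around the git CLI, looks roughly like this (the remote and branch names are illustrative):

```python
import subprocess


def git(*args):
    # Echo and run a git command, failing loudly on errors
    print("+ git", " ".join(args))
    subprocess.run(["git", *args], check=True)


# Customer-2 upgrade flow: park local customizations, take the vendor 2.0
# release, then re-apply the parked changes on top of it.
git("stash", "push", "-m", "red-button customization")   # set aside local changes
git("fetch", "vendor")                                    # 'vendor' = application developer's repo (illustrative)
git("merge", "vendor/release-2.0")                        # bring in the 2.0 release
git("stash", "pop")                                       # re-apply the red button change; resolve conflicts if any
```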


The ServiceNow source control integration allows application developers to integrate with a GIT repository to save and manage multiple versions of an application from a non-production instance.


Using the best practices of DevOps and Version Control with Git it is much easier to deliver software applications to multiple customers while dealing with the complexities of customized versions. To know more about ServiceNow application best practices and DevOps feel free to contact: marketing@relevancelab.com


