
For large enterprises and SMBs with multiple AWS accounts, monitoring and managing those accounts is a major challenge: they are spread across multiple teams and can run to a few hundred in some organizations.


AWS Control Tower helps organizations set up, manage, monitor, and govern a secure multi-account environment using AWS best practices.



Benefits of AWS Control Tower:


  • Automate the setup of multiple AWS environments in a few clicks with AWS best practices
  • Enforce governance and compliance using guardrails
  • Centralized logging and policy management
  • Simplified workflows for standardized account provisioning
  • Perform Security Audits using Identity & Access Management

Features of AWS Control Tower:


a) AWS Control Tower automates the setup of a new landing zone, which includes:


  • Creating a multi-account environment using AWS Organizations
  • Identity management using AWS Single Sign-On (SSO) default directory
  • Federated access to accounts using AWS SSO
  • Centralized logging from AWS CloudTrail and AWS Config, stored in Amazon S3
  • Cross-account security audits using AWS IAM and AWS SSO

b) Account Factory


  • Account Factory automates the provisioning of new accounts in the organization.
  • It provides a configurable account template that standardizes the provisioning of new accounts with pre-approved account configurations (see the sketch below).
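
Since Account Factory is exposed through AWS Service Catalog, a new account can also be provisioned programmatically. Below is a minimal boto3 sketch, with illustrative account names, emails, and OU; the parameter keys reflect the Account Factory product's usual inputs, but treat them as assumptions to verify against your own landing zone.

```python
# Hypothetical sketch: provision an account through Account Factory,
# which Control Tower exposes as an AWS Service Catalog product.
import boto3

sc = boto3.client("servicecatalog")

# Look up the Account Factory product and its latest provisioning artifact.
product = sc.search_products(
    Filters={"FullTextSearch": ["AWS Control Tower Account Factory"]}
)["ProductViewSummaries"][0]
artifact = sc.list_provisioning_artifacts(
    ProductId=product["ProductId"]
)["ProvisioningArtifactDetails"][-1]

# Provision a new account into an existing OU (all values illustrative).
sc.provision_product(
    ProductId=product["ProductId"],
    ProvisioningArtifactId=artifact["Id"],
    ProvisionedProductName="dev-team-account",
    ProvisioningParameters=[
        {"Key": "AccountName", "Value": "dev-team"},
        {"Key": "AccountEmail", "Value": "aws-dev@example.com"},
        {"Key": "ManagedOrganizationalUnit", "Value": "Workloads"},
        {"Key": "SSOUserEmail", "Value": "owner@example.com"},
        {"Key": "SSOUserFirstName", "Value": "Dev"},
        {"Key": "SSOUserLastName", "Value": "Owner"},
    ],
)
```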

c) Guardrails


  • Pre-bundled governance rules for security, operations, and compliance that can be applied to Organizational Units or a specific group of accounts.
  • Preventive Guardrails – Prevent policy violations through enforcement. Implemented using AWS CloudFormation and Service Control Policies
  • Detective Guardrails – Detect policy violations and alert in the dashboard using AWS Config rules
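
To make the preventive side concrete, here is a sketch of what such a guardrail amounts to under the hood: a Service Control Policy that denies changes to CloudTrail, created and attached with boto3. Control Tower manages the real guardrail policies itself; the policy content and OU id below are illustrative placeholders.

```python
# Illustrative sketch of a preventive guardrail as a Service Control
# Policy attached to an OU. Control Tower manages these for you.
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailChanges",
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-cloudtrail-changes",
    Description="Prevent disabling of audit logging",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU id
)
```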

d) Three types of guidance (applied to guardrails)


  • Mandatory Guardrails – Always Enforced. Enabled by default on landing zone creation.
  • Strongly recommended Guardrails – Enforce best practices for well-architected, multi-account environments. Not enabled by default on landing zone creation.
  • Elective guardrails – To track actions that are restricted. Not enabled by default on landing zone creation.

e) Dashboard


  • Gives complete visibility into the AWS environment
  • Shows the number of OUs (Organizational Units) and accounts provisioned
  • Shows the guardrails enabled
  • Lists the non-compliant resources based on the guardrails enabled
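
The same inventory the dashboard shows can also be pulled from the AWS Organizations API. A minimal sketch, assuming it runs from the master account (pagination omitted for brevity):

```python
# Sketch: list OUs and accounts, mirroring the dashboard's inventory view.
import boto3

org = boto3.client("organizations")

root_id = org.list_roots()["Roots"][0]["Id"]
ous = org.list_organizational_units_for_parent(ParentId=root_id)["OrganizationalUnits"]
print(f"OUs under root: {len(ous)}")

accounts = org.list_accounts()["Accounts"]
print(f"Accounts: {len(accounts)}")
for acct in accounts:
    print(acct["Id"], acct["Name"], acct["Status"])
```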

Steps to set up AWS Control Tower:


Setting up Control Tower on a new account is relatively simple compared to setting it up on an existing account. Once Control Tower is set up, the landing zone should have the following:


  • 2 Organizational Units
  • 3 accounts: a master account and isolated accounts for log archive and security audit
  • 20 preventive guardrails to enforce policies
  • 2 detective guardrails to detect config violations


The next step is to create a new Organizational Unit, then create a new account using Account Factory and map it to the OU that was created. Once this is done, you can start setting up your resources, and any non-compliance starts reflecting in the Noncompliant resources dashboard. In addition, any deviation from standard AWS best practices is reflected in the dashboard.
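
Because detective guardrails are implemented as AWS Config rules, the noncompliant resources shown in the dashboard can also be queried directly in each enrolled account. A small boto3 sketch:

```python
# Sketch: list resources flagged NON_COMPLIANT by Config rules
# (which is how Control Tower implements detective guardrails).
import boto3

config = boto3.client("config")

rules = config.describe_compliance_by_config_rule(
    ComplianceTypes=["NON_COMPLIANT"]
)["ComplianceByConfigRules"]

for rule in rules:
    name = rule["ConfigRuleName"]
    results = config.get_compliance_details_by_config_rule(
        ConfigRuleName=name, ComplianceTypes=["NON_COMPLIANT"]
    )["EvaluationResults"]
    for result in results:
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        print(name, qualifier["ResourceType"], qualifier["ResourceId"])
```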


Conclusion:


With many organizations opting for AWS cloud services, AWS Control Tower's centralized management offers the simplest way to set up and govern multiple AWS accounts securely, through the features and established best practices described above. Provisioning a new AWS account is as simple as clicking a few buttons while conforming to the organization's requirements and policies. Relevance Lab can help your organization build AWS Control Tower and migrate your existing accounts to Control Tower. For a demo of Control Tower usage in your organization, or for more details, please reach out to marketing@relevancelab.com.



The industry has witnessed a remarkable evolution in cloud, virtualization, and networking over the last decade. Looking ahead, enterprises should structure their cloud adoption, including account creation and management, and provide VPC (Virtual Private Cloud) scalability with an optimized network connectivity model.


Relevance Lab focuses on optimizing the network connectivity model using AWS VPC and related services. We are one of the premium partners of AWS, with extensive experience in cloud engineering, including account management, AWS VPC and services, VPC peering, and on-premises connectivity. In this blog, we will cover the more significant aspects of connectivity.


As an initial step, we help our customers optimize account management, including policies, billing, and account structure, and automate account and policy creation with pre-built templates, leveraging AWS Control Tower and its landing zone.


Distributed Cloud Network and Scalability Challenges


As AWS cloud infrastructure and applications are distributed across the globe, including North America, South America, Europe, China, Asia Pacific, South Africa, and the Middle East, scalability issues are to be anticipated. Hence, we bring in best practices while designing the cloud network, including:


  • Allocate VPC networks for each region; make sure they do not overlap with on-premises ranges or with peered VPCs (see the overlap-check sketch after this list)
  • Subnet sizes are permanent and cannot be changed, so leave ample room for future growth
  • Create an EC2 instance growth chart considering your organization's, customers', and partners' requirements and their global distribution; forecast for the next few years
  • Adapt to the route scalability supported by AWS
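
For the first point, a quick planning aid: check that proposed per-region VPC CIDRs overlap neither each other nor the on-premises range. The address ranges below are purely illustrative.

```python
# Sketch: detect overlaps among planned VPC CIDRs and on-premises ranges.
from ipaddress import ip_network
from itertools import combinations

cidrs = {
    "on-prem": ip_network("10.0.0.0/12"),
    "us-east-1": ip_network("10.16.0.0/16"),
    "eu-west-1": ip_network("10.17.0.0/16"),
    "ap-south-1": ip_network("10.18.0.0/16"),
}

for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        print(f"Overlap: {name_a} {net_a} <-> {name_b} {net_b}")
```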

The AWS backbone provides private connectivity to AWS services, so no traffic leaves the backbone, avoiding the need to set up a NAT gateway to reach these services. This brings enhanced security and cost-effectiveness.


Amazon S3 and DynamoDB are the services supported behind Gateway Endpoints, and these endpoints are free to provision.
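
Provisioning such an endpoint is a single API call. A minimal boto3 sketch with placeholder VPC and route-table ids:

```python
# Sketch: create a free S3 Gateway Endpoint in a VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC id
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)
```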


VPC peering is the simplest and most secure way to connect two VPCs, but a VPC cannot transit traffic for other VPCs. Because of this design constraint, the number of peering connections enterprises require is very high and grows as the number of VPCs scales. Hence, we suggest the VPC peering connectivity model only if the number of VPCs is limited to about 10.
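
For completeness, a sketch of establishing a peering connection between two VPCs in the same account and region; all ids and CIDRs are placeholders. Note that both sides also need routes pointing at the peering connection.

```python
# Sketch: create, accept, and route a VPC peering connection.
import boto3

ec2 = boto3.client("ec2")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # requester VPC (placeholder)
    PeerVpcId="vpc-bbbb2222",  # accepter VPC (placeholder)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route the requester VPC's traffic for the peer CIDR via the peering link.
ec2.create_route(
    RouteTableId="rtb-aaaa1111",          # placeholder
    DestinationCidrBlock="10.17.0.0/16",  # peer VPC CIDR (placeholder)
    VpcPeeringConnectionId=pcx_id,
)
```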


On-premises connectivity models for cloud – Our Approach


The traditional connectivity model, a site-to-site VPN between a VGW (Virtual Private Gateway) and a customer gateway, provides less bandwidth, is less elastic, and cannot scale.


A Direct Connect Gateway may be used to aggregate VPCs across geographies. Relevance Lab recommends introducing a Transit Gateway (TGW) between the VGW and the Direct Connect Gateway. This provides a hub-and-spoke design for connecting VPCs and an on-premises data center or enterprise offices. A TGW can aggregate thousands of VPCs within a region, and it can work across AWS accounts. If VPCs across multiple geographies are to be connected, a TGW can be placed in each region. This option allows you to connect the on-premises data center to up to three TGWs spanning regions, each of which can aggregate thousands of VPCs.
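
A sketch of the hub side of this design: creating a Transit Gateway and attaching a VPC to it with boto3. Ids and subnets are placeholders, and the Direct Connect Gateway association is a separate step not shown here.

```python
# Sketch: create a regional Transit Gateway hub and attach one spoke VPC.
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(
    Description="Regional hub for VPC aggregation"
)["TransitGateway"]

ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",           # placeholder spoke VPC
    SubnetIds=["subnet-0123456789abcdef0"],  # one subnet per AZ (placeholder)
)
```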



This model also allows connecting through a colocation facility between the VPCs and the customer data center, and provides bandwidth up to 40 Gbps through link aggregation.


Relevance Lab is evaluating the latest features and helping global customers migrate to this distributed cloud model, which provides high scalability and bandwidth along with cost optimization and security.


For more information, feel free to contact marketing@relevancelab.com.



Automation with simple scripts is relatively easy, but complexity creeps in when solving real-world, production-grade problems. A compelling use case was shared with us by a large financial asset management customer. They work with a provider of a large number of property and financial data feeds, in multiple formats and at frequencies ranging from daily, weekly, and monthly to ad hoc. The customer's business model is driven by processing these feeds and creating "data pipelines" for ingestion, cleansing, aggregation, analysis, and decisions from their Enterprise Data Lake.


The customer's current ecosystem comprises multiple ETL jobs that connect to various internal and external systems and feed a Data Lake for further processing. The complexity was enormous: the volume of data was high, failures were frequent, and continuous human intervention and monitoring of these jobs was required. Support teams receive email notifications only when a job completes successfully or fails. Thus, the legacy system makes job monitoring and exception handling quite tricky. The following simple pictorial representation explains a typical daily data pipeline and its associated challenges:



The legacy solution has multiple custom scripts implemented in Shell, Python, and PowerShell that call Azure Data Factory via an API to run a pipeline. Each independent task had its own complexities, and there was no end-to-end view with real-time monitoring and error diagnostics.
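
To give a flavor of such a trigger script, here is a minimal Python sketch (not the customer's actual code) that starts an Azure Data Factory pipeline run and polls its status using the azure-mgmt-datafactory SDK; all resource names are placeholders.

```python
# Sketch: trigger an ADF pipeline run and poll until it completes.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

run = client.pipelines.create_run(
    resource_group_name="rg-datalake",  # placeholder
    factory_name="adf-feeds",           # placeholder
    pipeline_name="daily-ingest",       # placeholder
)

# Poll until the run finishes; a production script would also notify
# support teams on failure, which is exactly the gap described above.
while True:
    status = client.pipeline_runs.get("rg-datalake", "adf-feeds", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```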


A new workflow model was developed using the RLCatalyst workflow monitoring component (using YAML definitions), and the existing customer scripts were converted to RLCatalyst BOTs using a simple migration designer. Once loaded into RLCatalyst Command Centre, the solution provides real-time and historical views, notifies support teams of anomalies, and can take auto-remediation steps based on configured rules.


We deployed the entire solution, including migration of the existing scripts, in the customer's Azure environment in just three weeks.



RLCatalyst Workflow Monitoring provides a simple and effective solution quite different from standard RPA tools: RPA deals with end-user processing workflows, while RLCatalyst Workflow Monitoring is more relevant for machine data processing workflows and jobs.


For more information, feel free to contact marketing@relevancelab.com.

