Blog, 2016 Blogs

As an organization, your success depends on how you manage the customer experience, whether internal or external. The level of support and the turnaround time of incident resolution play a pivotal role in whether the customer is delighted or extremely dissatisfied. Most technology firms have a defined Incident Management System that helps track, monitor and resolve incidents as best they can.


Typically, L1 and L2 support form the primary and secondary lines of support, receiving requests and complaints via different channels such as phone, web, email or even chat about technical difficulties that need attention. Organizations need a scalable, reliable and agile system that can manage incidents without loss of time, to ensure that normal business operations are not impacted in any way. Even though we’re in the age of Artificial Intelligence and other technological innovations, many organizations continue to rely on manual interventions, which are a drain on time, effort and effectiveness.


Automation of Incident Management – perhaps the only way forward

Here are six good reasons to automate Incident Management:


a) High Productivity, Low Costs – when manual interventions for repeatable tasks are reduced through automation, staff are freed up to focus on more strategic, value-added business tasks. Costs come down, as fewer valuable man-hours are spent on tasks that are now automated.


b) Faster Resolution – the entire process is streamlined: incidents are captured through self-service portals or other reporting mechanisms such as mail, phone and chat. Automation allows better prioritization and routing to the appropriate resolution group, resulting in faster and improved incident resolution.


c) Minimize Revenue Losses – if service disruptions are not addressed immediately, businesses stand to lose revenue, which in turn hurts reputation. Service delivery automation helps minimize the negative impact of these disruptions and achieves higher customer satisfaction by restoring services quickly without impacting business stability.


d) Better Collaboration – automation leads to better collaboration between business functions. With a bi-directional communication flow, detection, diagnosis, repair and recovery are achieved in record time. Incident management becomes easier and faster, with improved performance thanks to clearly defined tasks and increased communication.


e) Effective Planning & Prioritization – incidents can be prioritized based on the severity of the issue and can be automatically assigned or escalated, with complete information, to the appropriate task force. This helps in better scheduling and planning of overall incident management, leading to more efficient and effective handling of issues.
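To make the idea concrete, here is a minimal sketch of severity-based assignment and escalation. The severity levels, group names and SLA thresholds below are illustrative assumptions, not the actual model of RLCatalyst or any particular tool.

```python
from dataclasses import dataclass

# Illustrative routing table: severity -> (resolution group, escalation SLA in
# minutes). Group names and thresholds are hypothetical examples.
ROUTING = {
    "critical": ("major-incident-team", 15),
    "high":     ("l2-support", 60),
    "medium":   ("l1-support", 240),
    "low":      ("l1-support", 1440),
}

@dataclass
class Incident:
    id: str
    severity: str

def route(incident: Incident):
    """Assign an incident to a resolution group with an escalation deadline."""
    group, escalate_after = ROUTING.get(incident.severity, ("l1-support", 1440))
    return {"incident": incident.id, "group": group,
            "escalate_after_minutes": escalate_after}

print(route(Incident("INC-1001", "critical")))
```

An unknown severity falls back to the default L1 queue, so every incident is always routed somewhere.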


f) Incident Analysis For Overall Improvement of Infrastructure – with an analytics dashboard that you can call up periodically, you can assess problem areas based on frequency, time, function, etc. This can lead to better-planned expenditure on future infrastructure investments.


Service delivery automation is the practical and sensible approach for organizations that want to reduce human error, cut the valuable man-hours spent on repetitive tasks and significantly improve the quality of delivery.


RLCatalyst is integrated with ServiceNow and helps automate ticket resolution through its powerful BOTs framework. With RLCatalyst you can automate the provisioning of your infrastructure, keep a catalog of common software stacks, choose what to use based on what’s available, and improve the overall quality of service delivery. RLCatalyst is intelligent software that can study discernible patterns and identify frequently logged tickets, enabling faster incident resolution.


RLCatalyst comes with a library of over 200 BOTs that you can customize based on your needs and requirements. With RLCatalyst you get the dual benefit of DevOps automation and service delivery automation, taking your organization to the next level of efficiency and productivity.



Blog, 2016 Blogs

Everywhere you look, technology innovation seems to be mutating rapidly, like there’s no tomorrow. Before you can say “automate”, there’s a new application, product, or service that allows organizations and businesses to accomplish a whole truckload of manual tasks by way of tools and services that let them simply do more.


Businesses today want to align their IT build strategy to business, automate and analyze business services and performance, explore new product and development opportunities, improve their service delivery, manage compliance, mitigate risk and at the same time ensure customer satisfaction.


Monitoring performance across several functions continually, while proactively handling outages, requires proactive planning and insight.


Platform-as-a-Service provider ServiceNow, for example, helps you manage your IT, business and applications without hassle. With PaaS, the IT function can focus on improving workflow efficiencies across DevOps, working collaboratively and quickly, which results in faster delivery.


DevOps Automation And PaaS


Large enterprises with distributed assets and public cloud can benefit from better-managed workflow and automation. A fuzzy view of how your DevOps is performing can affect delivery timelines, which span a whole bunch of processes including Log Management, Incident Management, Problem Resolution, Change Management, Configuration Management and programmed CMDB, among others. These processes can benefit from DevOps automation tools such as RL Catalyst, which avoids the conventional, people- and process-heavy approach to managing DevOps. With built-in self-service remediation and integration with monitoring tools, RL Catalyst provides real-time visibility into the health and diagnostics of the overall system. All data is captured, processed and analyzed, and actionable information is created to pass feedback to the Operations, Testing, Development and Product Management groups. For other workflows that need human intervention, self-service is integrated between planning tools, orchestration and workflow approval tools.


With PaaS providers such as ServiceNow, businesses have ready-made, go-to solutions that help empower, govern, manage compliance, and pretty much help keep an eye on every aspect of business, giving you better throughput, saving you money and deriving maximum business value. Whether it’s IT project management, demand management, resource management, financial management, or Agile development, there’s greater certainty when it comes to service or product delivery.


Both RL Catalyst and ServiceNow boost productivity by automating mundane tasks, giving businesses greater visibility and control over desired outcomes. So in effect, what it all boils down to is that current IT development, business and application environments need a whole new level of sophistication for optimal performance. This degree of sophistication can be achieved with advanced automation products and platforms such as ServiceNow and RL Catalyst, which are helping transform businesses.


BOTs – another face of automation


When you think of BOTs, they’ve actually been around for over 5 decades. But according to Kurt Wagner of recode.net, we’re increasingly hearing of BOTs because, “the technology that powers BOTs, artificial intelligence software, is improving dramatically, thanks to heightened interest from key Silicon Valley powers like Facebook and Google.”


BOTs are being developed to help people accomplish a bunch of things. From scheduling meetings to managing your email threads to perhaps even filing your taxes, BOTs are being designed to do your thinking for you. They could find multiple uses across verticals including healthcare, IT, manufacturing and more. After all the idea is to take away the tedium of manual tasks, increase efficiencies, reduce errors, and manage and improve overall performance of anything – product, service, design, tools and everyday tasks.


BOTs can be multi-faceted and perform several roles – from monitoring your CPU usage, to collating IT spend from spreadsheets for greater transparency, to capturing demand requests from multiple systems and displaying them on a single dashboard, and more.


When product specialists such as RL Catalyst and PaaS providers such as ServiceNow integrate BOTs into their operating environments, businesses can rest assured that BOTs will empower teams, management and end-customers to work with greater cohesion – delivering what is needed, when it’s needed – where the results are assured, transformative and profitable.


ServiceNow: is changing the way people work by placing a service-oriented lens on the activities, tasks and processes that make up day-to-day work life, helping organizations operate faster and be more scalable than ever before.


Relevance Lab: is a DevOps Specialist Company focused on making Cloud adoption easy for Enterprises by leveraging DevOps Driven Engineering, DevOps Driven ITSM and DevOps based Cloud Governance.



Blog, 2016 Blogs

When multiple teams and resources are involved in development and operations, getting a clear-cut picture of what is happening where is not easy. Though there are many tools like JIRA and Jenkins, each of which provides its own dashboard, there is no single consolidated dashboard that gives an end-to-end delivery pipeline view. Collecting DevOps statistics from various tools is a challenge, and Catalyst has solved this using its dashboard framework, which gives a 360-degree view of development, QA, Continuous Integration and deployment.


The framework has a highly flexible and configurable GUI, where widgets can be configured for different data sources/tools. The default view has multiple widgets that give a team the 360-degree view of development and operations:


  • Project management – JIRA
  • Commit – Github, Bitbucket and SVN
  • Build – Jenkins
  • Quality – Sonarqube
  • Deployment – Octopus, XLDeploy and Urbancode
  • Monitoring
  • ChatOps

Each of these widgets supports multiple sources, and any source can be configured according to project needs. Dashboards can be configured at two levels:


  • Team

Team level dashboards on story management, code commits, builds and deployment


  • Product or program view

Rolled-up view for a product that involves multiple teams – shows the flow of commits from dev to prod environments
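As an illustration of how such widgets might be declared, here is a hypothetical configuration sketch; the field names and schema are assumptions made for this example, not RLCatalyst's actual configuration format.

```python
# Illustrative team-level dashboard configuration: each widget names the
# data source/tool it pulls from. Field names are hypothetical.
dashboard_config = {
    "level": "team",
    "widgets": [
        {"type": "project-management", "source": "jira",      "board": "TEAM-1"},
        {"type": "commit",             "source": "github",    "repo": "org/app"},
        {"type": "build",              "source": "jenkins",   "job": "app-ci"},
        {"type": "quality",            "source": "sonarqube", "project": "app"},
        {"type": "deployment",         "source": "octopus",   "project": "app"},
    ],
}

def sources(config):
    """List the tools this dashboard pulls data from."""
    return [w["source"] for w in config["widgets"]]

print(sources(dashboard_config))
```

Swapping a widget's `source` (say, `github` for `bitbucket`) is all that a project with different tooling would need to change.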


How it works


image

The framework runs a collector for each data source, which collects and stores data from that source at a predefined interval. The aggregation layer reads this data, converts it into meaningful information and sends it to the GUI layer and charts through REST API calls.


Once the team dashboards are configured in the widgets, the data is automatically collected in the back end and the display refreshes every minute.
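The collector/aggregation flow described above can be sketched as follows. The in-memory store and the fake Jenkins collector are stand-ins for the framework's actual collectors and database; in a real deployment a scheduler would invoke each collector at its predefined interval and serve the aggregates over REST.

```python
from collections import defaultdict

# Stand-in for the framework's data store: raw records keyed by source.
store = defaultdict(list)

def collect(source, fetch):
    """Run one collection cycle: fetch raw records and store them."""
    store[source].extend(fetch())

def aggregate(source):
    """Turn raw records into the summary a dashboard widget would display."""
    records = store[source]
    passed = sum(1 for r in records if r["status"] == "SUCCESS")
    return {"total": len(records), "passed": passed}

# Example: a fake Jenkins collector returning two build records.
collect("jenkins", lambda: [{"status": "SUCCESS"}, {"status": "FAILURE"}])
print(aggregate("jenkins"))
```

Separating `collect` from `aggregate` mirrors the layering in the framework: collectors stay dumb and source-specific, while aggregation produces tool-agnostic summaries for the GUI.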


On the JIRA dashboard, the team can see the features being developed in the running sprint – work in progress and features that are done.


The code repo widget shows coding activity – who has committed, and the trends.


The Build dashboard shows the details of the last five builds, the status of each, and the trends.


The Quality dashboard shows the code coverage, unit tests, security analysis etc.


Deployment dashboard shows the deployment details per environment.


The framework can be extended to support any type of DevOps tool.


Pipeline View


The product dashboard gives a roll-up view of the entire product, which may include multiple components of each kind – feature management, code repositories, jobs, etc. It shows the flow of commits from development to testing to production stages as a pipeline view, with appropriate details of unit and functional testing. It also gives trends on product health and performance.


image


Blog, 2016 Blogs

In today’s DevOps age, Continuous Testing is a critical component for any company planning to release new software. Releasing new software, or even a new version of an existing product, is often a daunting task – it requires time, it’s risky and, more often than not, the software is inadequately tested before release. This last bit is crucial, because testing helps the company understand whether the software is working properly. And isn’t that the main goal of every piece of software? A bug-plagued product is of little worth to the company.


The Business Problem:


Studies show that a product company that releases a new version every quarter faces routine challenges such as:


  • The product released had untested features and regression bugs, resulting in compromised customer experience.
  • Many issues fixed in the development stage were delaying the move to production.



This is where Continuous Testing comes in! Continuous Integration and Testing help overcome these problems.
Wikipedia defines Continuous Testing as “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.”


In this article, we describe how Continuous Testing can be implemented in a product engineering group.


Role of Continuous Testing:


As enterprises continue to evolve their processes and practices – so they can reduce the time from the development to the production stage – Continuous Testing plays a key role in facilitating this.


Continuous Testing in its most advanced form is seamless integration of code repository, build, unit tests, functional tests, and deployment to stage / production server. However, organizations cannot reach the final stage in one go. They have to implement the process one stage at a time, to avoid disruption in operations during the implementation process.
Continuous Testing will also help save production costs, because a bug detected during the testing phase will prove to be less expensive to fix than one caught during the production phase.


With Continuous Testing, a software company can avail these following key benefits:


  • Early discovery of bugs and defects in the software
  • Quick turnaround time (TAT), as processes are automated
  • Improvement in the quality of the product
  • Reduced production cost
  • Better time management

Recommended Approach to Continuous Testing:


As mentioned, for seamless integration it is important that companies follow a recommended progression to ensure they reap the benefits of Continuous Testing. With that in mind, below is a recommended approach that will help companies improve their Continuous Testing process.


1. To improve the quality of the code, it is recommended that


  • The source code repository (SVN/Git) is integrated with code quality tools such as PMD and FindBugs to ensure that the source code maintains technical quality as per the defined baseline.

2. To shorten the testing time for the monthly release, the following process and tools are recommended


  • The code repository is integrated with a CI tool like Jenkins. A build job defined in Jenkins is triggered every time new code is checked in or updated.
  • The development build is automatically run through unit tests using JUnit.
  • The testing of stable features is automated.
  • The automated tests – smoke tests and functionality tests – are run continuously using a tool like Jenkins.
  • Issues are reported to the development team, and a new build is generated as a result of the code changes.
  • The release is done once the new features are tested and the product is stable.

The above set of steps is repeated for every release.
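For the check-in-triggered build step, Jenkins exposes a remote-access API whose job-trigger endpoint can be sketched like this. The server URL and job name are placeholders, and a real setup would also supply credentials (and typically a CSRF crumb).

```python
# Sketch of triggering a Jenkins build job on a new check-in via
# Jenkins' remote-access API. URL and job name are illustrative.
def build_trigger_url(base_url, job_name):
    """Return the Jenkins remote-trigger endpoint for a job."""
    return f"{base_url.rstrip('/')}/job/{job_name}/build"

url = build_trigger_url("https://jenkins.example.com/", "app-ci")
print(url)
# In practice, an SCM webhook (or SCM polling) would issue a POST to this
# URL, e.g.: requests.post(url, auth=(user, api_token))
```

Most teams let the SCM webhook do this automatically, so no one has to remember to kick off a build after a commit.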


image

Current Status of Continuous Testing:


In the product group, the process of continuous testing is a combination of manual and automated tests. It is implemented as follows:


  • The test sets are divided into a Smoke Tests set and Functionality Tests
  • The Functionality Tests are grouped into different batches
  • The functionality test batches are scheduled to run on alternate days
  • The smoke tests are scheduled to run every day
  • The tests run continuously on the test environment

The test sets essentially cover the regression tests and ensure that there are no regression bugs when a new release is planned. For testing the new features of each release, a manual testing approach is adopted.
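The alternating schedule above can be sketched as a small helper. The batch names and the day-of-year parity rule are illustrative assumptions, not the product group's actual scheduler.

```python
from datetime import date

# Hypothetical grouping: functionality batches alternate between two
# groups; smoke tests run every day.
FUNCTIONALITY_BATCHES = [["batch-A", "batch-B"], ["batch-C", "batch-D"]]

def suites_for(day: date):
    """Return the test suites scheduled to run on a given day."""
    suites = ["smoke"]
    # Alternate the functionality batch group by day-of-year parity.
    suites += FUNCTIONALITY_BATCHES[day.toordinal() % 2]
    return suites

print(suites_for(date(2016, 6, 1)))
print(suites_for(date(2016, 6, 2)))
```

Consecutive days pick up different functionality groups, so the full regression set is covered every two days while smoke coverage stays daily.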


The product group has reduced regression test time by 2 weeks per release. It has also removed the manual effort of regression testing by using the Selenium test automation tool.


In Conclusion:


To sum up, for continuous delivery to be a success, companies need to set up a favorable environment where the Continuous Testing model can work. Continuous Testing has been a part of the development process for years now, but with growth in technology and its subsequent evolution, Continuous Testing has come to the forefront and its importance is being recognized. The process itself has also evolved to suit the changing requirements. Hence, it is imperative that Continuous Testing be implemented diligently in software companies. Why? Because “All code is guilty, until proven innocent.” 


How important do you think Continuous Testing is? Would you recommend it or do you think software companies can afford to give it a miss? Comment below and let us know your thoughts.



Blog, 2016 Blogs

IT organizations often end up underutilizing their assets when they work on hybrid cloud, with a large number of resources to be managed. Identifying phantom capacity and decommissioning unused infrastructure has become a challenge for most traditional large organizations that have no automated tracking in place. As an example, consider an organization that has 500+ instances in AWS. These instances could be distributed across multiple projects, and each project might have many environments like Dev, QA and Prod. In the initial days, the allocation of instances to projects/environments may be known and tracked, but over time this information becomes obsolete or goes untracked.


RLCatalyst offers a command-and-control system that brings all your assets across cloud providers into a single platform and gives a complete picture of what is allocated where. Assets can either be managed by RLCatalyst using a Chef agent, or simply assigned to projects to keep track of allocation. In the case of AWS, instances are usually tagged by Application, Cluster, Role, Owner, Cost Center, etc. RLCatalyst downloads the tag information from AWS and maps it to Catalyst projects, thereby assigning unmanaged instances to projects automatically. The assignment can run regularly at a predefined interval.
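The tag-to-project mapping can be sketched as follows. The tag keys and project names are hypothetical, and in practice the instance records would come from the AWS API (for example, boto3's `describe_instances` response, which carries a `Tags` list of Key/Value pairs in this shape).

```python
# Illustrative mapping from an instance tag to a Catalyst project.
# Tag keys and project names are assumptions for this sketch.
TAG_TO_PROJECT = {
    ("Application", "webstore"): "Project-Webstore",
    ("Application", "billing"):  "Project-Billing",
}

def assign_project(instance):
    """Assign an instance to a project from its tags, else mark it unassigned."""
    for tag in instance.get("Tags", []):
        project = TAG_TO_PROJECT.get((tag["Key"], tag["Value"]))
        if project:
            return project
    return "unassigned"

inst = {"InstanceId": "i-0abc", "Tags": [{"Key": "Application", "Value": "billing"}]}
print(assign_project(inst))
```

Running such a pass on a schedule is what keeps the project allocation from going stale as instances come and go.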


A sample tag based allocation of nodes in AWS is given below


image

Catalyst also provides a near-real-time dashboard of all the assets in your configured cloud provider. It gives the cost, sizing and usage/usage-trend details of each resource allocated to a project. The data can be analyzed over a period of time and the necessary actions taken to improve utilization. This is the foundation for creating ‘self-aware’ provider interfaces that help track cloud capacity and detect and sync unmanaged instances automatically.



Blog, 2016 Blogs

As enterprises have to deal with large amounts of data, the need evolved for a better way to store and analyse many types of data, eliminating the challenges posed by big data. The concept of the Data Lake thus emerged: a repository that can hold all the types of data enterprises need to capture and exploit. Though the data lake was initially tied to the Apache Hadoop ecosystem, as enterprises saw definite business value-add, they started creating data lakes to complement their data warehouses.


Data Lake caters to the following needs:


1. Stores raw data at a low cost


2. Stores both structured and unstructured data in the same repository


3. Performs data transformations and analytics


As IT organizations extend Agile and DevOps to make delivery faster, the need to consolidate information from multiple sources is also growing. The source can be a DevOps tool, a CMDB or a cloud provider. RLCatalyst solves this with its data lake, ‘RLCatalyst Pulse’.


Pulse is designed to collect data from different sources at regular intervals to get near-real-time information. Collectors load the data lake, and the data is later aggregated per specific requirements. The visualization layer consumes this data and sends it to various portals.


image

The data lake caters to various dashboards – Asset Tracking, Project Management, Release Management, Build/Deployment, QA, Dev and an end-to-end Operations dashboard. This gives a 360-degree view of your assets and process management. Enterprises in which assets are allocated across different teams without proper tracking can benefit from Catalyst Pulse, as it reports on phantom capacity, giving a clear picture of what is utilized or underutilized. This is being extended to build self-aware elements that can take automatic corrective actions by analyzing the various metrics available in the data lake.
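A phantom-capacity report of the kind described could be sketched like this. The record fields and the 10% average-CPU threshold are assumptions for illustration, not Pulse's actual schema or policy.

```python
# Illustrative asset records of the kind a data lake might hold.
# Field names and values are hypothetical.
assets = [
    {"id": "i-01", "project": "billing", "avg_cpu_pct": 62.0},
    {"id": "i-02", "project": "billing", "avg_cpu_pct": 3.5},
    {"id": "i-03", "project": None,      "avg_cpu_pct": 1.2},
]

def phantom_capacity(assets, cpu_threshold=10.0):
    """Flag instances that are unallocated or chronically idle."""
    return [a["id"] for a in assets
            if a["project"] is None or a["avg_cpu_pct"] < cpu_threshold]

print(phantom_capacity(assets))
```

Such a report is the kind of metric a self-aware element could act on automatically, for example by scheduling idle instances for review or shutdown.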



Blog, 2016 Blogs

Throughout the software development life cycle, your product goes through struggles before you see it deployed in production. These challenges have been around for years and still remain in many conventional and startup companies.


The top five challenges are:


1. Huge timeline from concept to production


2. Prolonged time delays as per fixed deployment schedules


3. Dependency on operations teams for code deployment


4. Default hardware/software configurations required per project


5. Making CI/CD work together as one piece, end to end


Online businesses feel that deployment processes are more challenging than in the past. Although there are plenty of tools and technologies available, deployment processes are still cumbersome. Cloud computing and mobile services are growing rapidly in the market, and this is one of the most important reasons companies find it hard to handle both the data and the deployment processes. Not more than half a decade ago, programmers did not have to worry about their code working on web applications, mobile applications, or a mix of both. So this indeed adds to the existing challenges one faces with deployments.


1. Huge timeline from concept to production


Companies take an ample amount of time to bring in a concept and transform it into reality. It takes months or even a year. If companies follow conventional deployments, then the time delay is indeed expected.


2. Prolonged Time delays as per the fixed deployment schedules


Many organizations still follow the conventional approach of scheduling deployments one after another. Deployment requests created by projects get approved within days or weeks of being scheduled. This is critical to the business and introduces huge delays in getting a project to a ship-ready state.


3. Dependency on operation teams for code deployment


Software projects become highly dependent on operations teams if every deployment still waits on an operations engineer. If you are one of those companies, you are still playing black-and-white movies in your theater.


4. Default Hardware/software configurations required per project


If a software project requires a specific set of default hardware/software configurations, IT teams mostly have to work on the requests and go through approval cycles for hardware, with a fair amount of delay in preparing the software configurations.


5. Making CI/CD work together as one piece from end to end.


Although some companies think they are fast in their deployments, they often are not. Even if their development teams put a CI/CD solution into practice, they still need huge technical expertise to run the show. This again brings technical expertise into the picture, with a big dependency on the technical team to take care of deployments even when they are called ‘automated’.


What RLCatalyst Offers:


By adopting Agile and DevOps, any concept can be seen as a Minimal Viable Product in action within a span of weeks. Yes, you read that right! DevOps waits for no one once your product is ready to be deployed. DevOps does not take huge effort to adopt or execute. With the right expertise, it can be achieved to the fullest in a short span of time.


Imagine if all the above issues had solutions, powered and packed into a single magic chest, developed by DevOps ninjas. This power-packed product can shed light on your deployment challenges and make a huge impact on your business with its one-click deployment.


The solution is RLCatalyst. Yes! RLCatalyst can make all the above issues fade away.


How it works


RLCatalyst gives you the flexibility to create templates for your application, which can be re-used on any instance of your choice. The templates are not tied to any provider, and you can deploy applications by launching instances on the fly or on existing instances. This can be done using Jenkins jobs or automation libraries from Chef. Once deployed, you can upgrade or promote the build from RLCatalyst. The pipeline view gives a snapshot of how builds move from one environment to another and the latest status of each.
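The promotion flow just described can be sketched as follows; the environment names and version numbers are illustrative, not RLCatalyst's actual pipeline model.

```python
# Illustrative build-promotion pipeline: each environment holds the
# version currently deployed there. Names and versions are hypothetical.
ENVIRONMENTS = ["dev", "qa", "prod"]

pipeline = {"dev": "1.4.2", "qa": "1.4.1", "prod": "1.4.0"}

def promote(pipeline, from_env):
    """Promote the build in one environment to the next stage."""
    next_env = ENVIRONMENTS[ENVIRONMENTS.index(from_env) + 1]
    pipeline[next_env] = pipeline[from_env]
    return pipeline

print(promote(pipeline, "qa"))  # the build in qa moves forward to prod
```

The pipeline view in the product is essentially a rendering of this state: which version sits in each environment, and how builds have flowed between them.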


Highlights


  • Design re-usable templates for your application
  • Jobs can be created around your existing Jenkins jobs
  • Provision to use Chef libraries to automate deployments
  • Provision to do deployments on multiple instances simultaneously
  • Visualization on how builds move from one environment to another
  • Option to promote the build to the selected version
  • Support for Nexus and Docker Repositories
  • Built-in support for all Java applications

RLCatalyst is now open source and free to use. Please see more details at RLCatalyst

