
The number of enterprises adopting Automation for their daily operations is growing rapidly. Automation is capable of revolutionizing IT service delivery and support, provided it is executed with the right planning and focus. Proper planning and setting the right level of expectations will result in at least one of the primary benefits – Cost Reduction, Revenue Generation, Risk Mitigation or Quality Improvement. So, what is the checklist for starting your Automation journey?


  1. What is your end goal?
  2. Are you targeting one or more of the four primary benefits (Cost Reduction, Revenue Generation, Risk Mitigation and Quality Improvement)?
  3. Do you target one or more groups?
  4. Do you have COE groups focusing on Automation for the entire company?
  5. Are the tasks to be automated in IT jobs, Service Requests, DevOps, etc.?
  6. Do you have some sort of automation already in place?
  7. Have you already invested in tools and technologies to support Automation?

Once you have these clearly defined, you can plan your automation journey. As with any other initiative to transform what you have today, Automation should start with an analysis of the existing landscape and conditions.



  • Current workload Analysis
    Your current workload could be in terms of the number of jobs, tickets, requests, calls, etc. Collect as much data as possible and group it into various categories. Most of the data will be non-standard, but even Excel-based filtering and sorting will give a good idea of where your team is spending the most effort and money. Go with the 80:20 rule to pick your candidates for immediate automation and design your Service Catalog around these (a small sketch of this analysis follows this list).
  • Right Process Identification
    Most analyses will reveal incorrect or inappropriate processes being followed for many of the existing workflows. Brainstorm with the teams concerned and define/standardize the process flows – including the stakeholders involved, approvals required and exceptions to be handled. The Service Catalog and the process flows will result in a more self-service-centric IT delivery system.
  • Plan Execution
    Start by segregating the automation candidates into:
    • Start Small – A few cases will show immediate results once automated. Start with one small area like Account Creation or Password Resets.
    • Self-Serve Automation – Next, focus can shift to the cases which need a move from generic incidents to self-service requests. Once that shift is done, these can be automated in the second phase of your automation journey.
    • AI Based – These are the cases which need patterns to be analyzed from the collected data and handled intelligently in automation.
  • Training & Communication
    Automation doesn’t yield the intended results unless it is used, and used in the right way. All parties involved, from end users to IT, should be informed upfront about the plan, and adequate training should be part of the overall execution plan. The actual benefits of automation should be demonstrated to each group in terms of time or effort savings.
  • Feedback and Improvements
    Automation is not a one-time project. It has to run in feedback cycles to find more exceptions and add them to the backlog. A systematic and regular audit of the automation results should be done to validate them against the expected outputs. Organizations adopt new technologies, tools and applications periodically, which can become part of the automation scope in the next phase. Relevance Lab’s AI-driven Automation Platform comes with a pre-built library of Automation BOTs for mundane IT tasks in areas like Identity & Access Management, Infrastructure Provisioning, DevOps, and Monitoring & Remediation.
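As a simple illustration of the 80:20 workload analysis mentioned in the first bullet above, here is a minimal Python sketch that groups a ticket export by category and picks the small set of categories driving most of the effort. The file name and column names are assumptions; adapt them to whatever your ITSM tool exports.

```python
# Minimal 80:20 (Pareto) analysis of a ticket export.
# Assumes a CSV with "category" and "effort_hours" columns -- adjust to your data.
import csv
from collections import defaultdict

def pareto_candidates(path, threshold=0.8):
    effort_by_category = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            effort_by_category[row["category"]] += float(row["effort_hours"])

    total = sum(effort_by_category.values())
    candidates, running = [], 0.0
    # Walk categories from highest to lowest effort until ~80% of effort is covered.
    for category, effort in sorted(effort_by_category.items(), key=lambda kv: kv[1], reverse=True):
        candidates.append((category, effort))
        running += effort
        if running / total >= threshold:
            break
    return candidates

if __name__ == "__main__":
    for category, effort in pareto_candidates("tickets.csv"):
        print(f"{category}: {effort:.1f} hours")
```

The categories this surfaces are good first candidates for the Service Catalog and for the "Start Small" phase described above.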

Please get in touch with marketing@relevancelab.com for more details on starting your Automation journey.


About Author
Ashna Abbas is Director of Product Management at Relevance Lab. She is a software professional with 12+ years of experience in product development and delivery.





You need experience to get experience. Most job seekers fresh out of university can relate to this when they’re looking to land a specific full-time role. One way to break this cycle is to work for a particular period of time that serves as a springboard, helping them develop expertise as well as come to grips with how people conduct themselves in a corporate environment.


An internship, as it is called, is the perfect response to the “you need experience to get experience” conundrum that job seekers face. It also helps employers source a steady stream of talent that can spur innovation within the organization itself.


In other words, it offers benefits to both parties: those who are seeking employment as well as corporations or small businesses that are looking for skilled, sincere employees.


What You Need to Know About An Internship


So, what is an internship? Why does it matter, in today’s job market?

An internship, simply put, is a period when you work with an employer, in a paid or unpaid role, in order to gain valuable work experience in the role of your choosing, which in turn helps you transition into your desired career.


While it is common knowledge that getting a relevant and skill-based degree can increase your chances of being gainfully employed, completing an internship can substantially raise your chances of finding employment sooner than most.


In fact, statistics reveal that undergraduate students who complete at least one internship during their time at university tend to fare better when it comes to finding full-time work in the near future compared to those who do not.


Speaking of the future, an internship gives students a taste of what is to come before they finally get their first job at a corporation. Not only will you learn how to perform the tasks necessary for your assigned role, but you will also get first-hand experience of working in such an environment.


Internships Offered at Relevance Lab


This is precisely what we offer at Relevance Lab too, where we source talent from a variety of universities around the country. Not only do interns benefit from obtaining relevant experience and expertise but our senior employees are able to hone their leadership skills through training programs.


Interns get an overview of the corporate work culture, business workflow and how their learning gained from time spent at university is implemented in the real world. Some of our interns have been recruited from reputed engineering colleges like AMC Engineering, MIT Manipal, Amity, NIT Surathkal and IIT Kharagpur.


Some of the projects that our interns have been involved with include:

  • Bill of Material explosion at Scale using Spark/Scala
  • Business Intelligence reporting by downloading Google Analytics data via the Google API at scale
  • Inventory Health Dashboard for supply chain analytics

That said, there are a slew of benefits on offer to both an employer and a potential employee if the latter proves his or her skills during the internship period. Even if some interns end up looking for work elsewhere, there are still several benefits to both parties, which we will address next.


Our Internship Program – Benefits


So, what are some of these benefits?

Apart from organizations gaining a steady source of fresh talent and freshers getting important work experience? Yes, there are several more, which is why interested individuals should seriously consider applying for an internship with us.


In particular, some of the benefits of applying for an internship with Relevance Lab include:

  • Edge in the job market
  • Gaining valuable work experience
  • Developing and refining skills
  • Networking with professionals in the field
  • Effortless transition into a full-time position

As for how our internship program benefits Relevance Lab itself, these include:

  • Locating a steady stream of new potential employees
  • Increased visibility on college campuses
  • Test-driving the talent
  • Obtaining a fresh perspective on old problems
  • Fostering leadership skills in current employees
  • Enhanced Social Media reach & Brand awareness

Of course, if you want to get started towards a successful career in the specialized IT services that we offer, you have to have experience to get experience, right?


About Author


Sampriti Banerjee is Marketing Executive at Relevance Lab.



There are occasions when one feels fulfilled and has a sense of accomplishment. Recently, I had such an experience and hence thought of penning down my thoughts here.


Access to clean water and sanitation is one of the biggest problems faced in India. One can either complain about it or take some decisive action. So, as a part of our Corporate Social Responsibility (CSR) initiative, my organization (Relevance Lab) decided to contribute towards hygiene and education as key themes.


We partnered with Child Help Foundation (CHF), an NGO that has a pan-India presence and works in the best interests of children in areas such as education, health, food, and shelter.


With our contribution, CHF took up a sanitation project and built two washrooms at the Government Lower Primary School in Guttahalli, in Karnataka’s Kolar district. We also contributed towards commissioning a rooftop water tank to ensure uninterrupted water supply and installed a UV-based water filter.


We enjoyed the drive to scenic Guttahalli, which is about 50 km from the hustle and bustle of Bangalore. The place is known for its ‘silk and milk’ heritage. We were overwhelmed by the hospitality of the teaching staff and the student community.


Officially, the project was inaugurated by the schoolchildren with the assurance that they would follow the recommended hygienic practices. They were so excited and enthusiastic that all of us felt very motivated. We distributed sweets and shared some toys with them.


The entire experience was both touching and motivational. The sparkle in the children’s eyes gave us a sense of accomplishment. It’s going to motivate me for the rest of my life – to do something good for a social cause!


That day, I also understood that as balanced, engaged, sustainable, and mature entities, organizations need to show their commitment towards important economic and social causes and contribute to the best of their ability. After all, businesses cannot be successful when the society around them fails!


About Author

Neeraj Deuskar is the Director and Global Head of Marketing at Relevance Lab.



In this era of digital transformation, organizations tend to be buried under a humongous amount of data or content. Websites form an integral part of any organization and encapsulate multiple formats of data, ranging from simple text to huge media asset files.


We see many business requirements to regroup/reorganize content, consolidate multiple sources of data, or convert legacy forms of data into new solutions. All these requirements involve content migration at its own scale, depending on the amount of data being migrated.


A common use case in any content management solution is moving heavy content between instances. AEM offers various methods, such as the vlt process, recap, and Package Manager. Each option has its own pros and cons, but they all share a common disadvantage: content migration takes a lot of time.


To overcome this, the latest versions of AEM have started supporting Grabbit as one of the quickest ways to transfer content between Sling environments. As per the AEM 6.4 documentation, there are two tools recommended for moving assets from one AEM instance to another.


Vault Remote Copy, or vlt rcp, allows you to use vlt across a network. You can specify a source and destination directory and vlt downloads all repository data from one instance and loads it into the other. Vlt rcp is documented at http://jackrabbit.apache.org/filevault/rcp.html.


Grabbit is an open source content synchronization tool developed by Time Warner Cable (TWC) for their AEM implementation. Because Grabbit uses continuous data streams, it has a lower latency compared to vlt rcp and claims a speed improvement of two to ten times faster than vlt rcp. Grabbit also supports synchronization of delta content only, which allows it to sync changes after an initial migration pass has been completed.


AEM 6.4 and Grabbit – possible?


We see a lot of questions in Adobe forums and TWC Grabbit forums asking if AEM 6.4 really supports Grabbit content transfer. The answer is yes!


Let’s look at the steps that need to be followed to use Grabbit on an AEM 6.4 instance and make it work across environments.


STEP 1: Install the following packages. Ensure the Grabbit package is installed at the end.

  1. Sun-Misc-Fragment-Bundle-1.0.0
  2. Grabbit-Apache-Sling-Login-Whitelist-1.0
  3. Grabbit-Deserialization-Firewall-Configuration-1.0
  4. Grabbit-7.1.5


STEP 2: Add the twcable Grabbit package to the Sling login admin whitelist – com.twcable.grabbit


STEP 3: Adjust the Deserialization firewall configuration in the OSGi console.


Ensure the following items are removed from the blacklist:


org.springframework.beans.factory.ObjectFactory
org.springframework.core.SerializableTypeWrapper$MethodInvokeTypeProvider


Ensure the following is added to the whitelist:


org.springframework.


Debugging


To ensure Grabbit is successfully installed, try hitting the Grabbit URL to fetch the list of transactions or jobs (http://<host>:<port>/grabbit/transaction/all). If this returns an empty list, Grabbit is successfully installed and ready to be used to send or receive data.
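If you would rather script this check, a minimal sketch is shown below. The host, port and credentials are placeholders for your own AEM instance; the only detail taken from this post is the /grabbit/transaction/all endpoint itself.

```python
# Sketch: verify Grabbit is installed by querying the transactions endpoint.
# Host, port and credentials are placeholders for your AEM instance.
import base64
import json
import urllib.request

def grabbit_installed(host="localhost", port=4502, user="admin", password="admin"):
    url = f"http://{host}:{port}/grabbit/transaction/all"
    request = urllib.request.Request(url)
    # Basic auth header, since the endpoint expects an authenticated AEM user.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode())
    # An empty list means Grabbit is installed but has no jobs yet.
    return isinstance(body, list)

if __name__ == "__main__":
    print("Grabbit reachable:", grabbit_installed())
```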


If anything goes wrong while running grabbit.sh on Windows to initiate a Grabbit job, it is difficult to get the error details because the command window closes immediately, so you cannot see the error code returned from the job. To get the error code/message in the command prompt window, comment out the Clear command inside the Else block of the newGrabbitRequest() function in grabbit.sh.


This will help you review the errors and resolve them effectively.


We have successfully migrated content between AEM 6.2 and 6.4 instances, as well as between two AEM 6.4 instances, using Grabbit.


Try these steps to install and use Grabbit without any hiccups, and make your content migration quick and smooth.


Enjoy your content migration!


 About Author


Saraswathy Kalyani is an experienced AEM and Portal consultant at Relevance Lab.




ChefConf is a global gathering that allows the DevOps community to learn, share ideas, and network. By sharing real-world examples of how organizations solve problems to deliver business value, ChefConf is all about tactics, strategies, and insights for transformational application delivery organizations.


Relevance Lab was a Silver Sponsor at ChefConf 2019 in Seattle. As a strategic Chef Partner, Relevance Lab provides end-to-end Chef Factory solutions to deliver real business value to enterprises, helping them build automated, well-governed, secure, and industry-compliant cloud environments.


At the event, Pradeep Joshi, Senior Director of DevOps at Relevance Lab, was interviewed by Chris Riley of Digital Anarchist, an all-new video platform from the MediaOps family of brands. Pradeep spoke about Relevance Lab’s presence in the DevOps domain for the past eight years and the various services that the company offers.


As a Chef Partner for six years, and focused on DevOps automation, Relevance Lab offers services such as infrastructure automation, configuration management, and continuous deployment, among others. Pradeep explained how DevOps has transformed businesses over the years. Projects used to start with hardware procurement, move on to application capacity planning, and go around in cycles. Things move a lot faster now as hardware is much more affordable and the cloud offers servers in a matter of minutes. At the same time, the mindset of people and the culture of organizations have also changed. From senior management to lower-level employees, people have been more accepting of these changes.


Pradeep reaffirmed that automation is key to the success of organizations across the world. According to him, “what to automate” is a tougher decision to make than “how to automate”. For instance, when there are different teams (IT, software, applications, database, production support, etc.) working together, Excel sheets, emails, and chats among the groups could delay processes to a large extent. When there’s a need for faster deployment or when there are configuration changes for the production team, people are skeptical about how such tasks can be automated. Primitive and inefficient ways slow down processes, and this is an area where products like Chef help automate processes through code. Relevance Lab advises its clients that infrastructure, security, applications, and compliance should be code. All of this can be achieved with automation.


On being asked how the automation idea began for Relevance Lab’s clients, Pradeep said it all started with a problem statement. There is always a need or a problem to be solved, and Relevance Lab is keen on understanding the exact problem. Is shipping the applications a major issue? Is managing configurations more taxing? Is there a cultural block in the organization that makes employees resistant to change? It is natural for employees to feel some anxiety while moving to the cloud; they often feel it is not safe to put production data on the cloud. According to Pradeep, this mindset should change and evolve with frequent meetings, discussions, and constant mentoring. Employees need to understand the benefits of moving to the cloud; they should be more agile, be favorable to market changes, and eventually get used to the new ways of doing things.


With affordable infrastructure available at the click of a button, people should start thinking from an application point of view, such as: what does my application need for deployment? After procuring hardware and scripting automation, Pradeep says the next big change is going to be all about doing things more intelligently. In this regard, Relevance Lab has come up with a new framework called BOTs that enables automation of mundane tasks such as password reset, user creation/deletion, and data backup.


Pradeep concluded the discussion by emphasizing the growing need to separate the tasks that need to be done by humans from those that can be automated. After all, automation allows an organization to get a lot more done in a day, ultimately boosting efficiency and enhancing productivity.


(This blog is based on a video interview conducted during ChefConf 2019 by MediaOps; the original video can be found here.)


Click here to View Video
Video Courtesy: Digital Anarchist



As part of its application development services, Relevance Lab has partnered with ServiceNow to implement its new Enterprise DevOps offering. This partnership enables “intelligent software change management” based on data inputs from various underlying DevOps tools.


Enterprise DevOps is a collaborative work approach that brings a tactical and strategic balance to businesses. Relevance Lab’s expertise in implementing automated pipelines around Infrastructure Automation, Configuration Management and Continuous Deployments will help in implementing end-to-end solutions to customers as they embrace ServiceNow Enterprise DevOps.


The initial release of ServiceNow Enterprise DevOps includes elements that address some specific use cases.


Integration: Out-of-the-box integrations with standard tools in the DevOps toolchain are among the primary use cases. Planned examples include GitHub, GitLab, Jenkins, Jira, and access to data from ServiceNow Agile Development (Agile 2.0) and other ServiceNow products.


Automation: The first automation use case will be to leverage data from these integrations to connect to ServiceNow ITSM Change Management. This will simplify the use of Change Management features and APIs to assess changes coming from the DevOps pipeline and to automate them where appropriate. Change approval policies will be a core component of this automation. Refer to this blog post for more information—the DevOps product will add an out-of-the-box capability to the whole process.
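To make the idea concrete, here is a rough sketch of how a pipeline step could raise a change request against ServiceNow ITSM Change Management today using the standard ServiceNow Table API. The instance name, credentials and field values are placeholders, and this is not the Enterprise DevOps product itself, just the kind of plain REST call it is meant to wrap with out-of-the-box capability.

```python
# Sketch: create a change request from a CI/CD pipeline step via the
# ServiceNow Table API. Instance, credentials and field values are placeholders.
import base64
import json
import urllib.request

def create_change_request(instance, user, password, short_description):
    url = f"https://{instance}.service-now.com/api/now/table/change_request"
    payload = json.dumps({"short_description": short_description, "type": "standard"}).encode()
    request = urllib.request.Request(url, data=payload, method="POST")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    request.add_header("Content-Type", "application/json")
    request.add_header("Accept", "application/json")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode())["result"]["number"]

if __name__ == "__main__":
    number = create_change_request(
        "my-instance", "pipeline-bot", "secret",
        "Deploy build 1234 to production",
    )
    print("Created change request:", number)
```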


Shared Insights: With an end-to-end view of the DevOps toolchain, there will be unique insights into both development and operations. This includes information for developers on production data and information for operations on changes, such as the ability to trace a change back to the original code commit and report on test runs.





As a new way of building high-quality analytics systems, Agile analytics promotes maximizing business value in functional areas like marketing, operations and supply chain. With practices for project planning, management and monitoring, Agile analytics enables effective collaboration with clients and stakeholders and ensures technical excellence by the delivery team.

Like Agile software development, Agile analytics is based on certain principles. It is not a rigid methodology but a development style that emphasizes the client’s goal of making better decisions using data-driven predictions. According to Gartner, “Analytic agility is the ability for business intelligence and analytics to be fast, responsive, flexible and adaptable.” It is a continuous process of iterating, testing and improving the analytics environment.

Agile analytics includes practices for project planning, management and monitoring in order to have effective collaboration in the business process.

The three themes that any supply chain enterprise needs to focus on in order to adopt the Agile approach are:

  1. Speed

Speed is a stepping stone in the Agile process. First, one must identify business and technology challenges and plan the appropriate methodology to mitigate roadblocks. Agile methodology assists in extracting large volumes of raw data from multiple sources and transforming it into meaningful business information. Data insights are becoming faster with machine learning techniques that make them available in weeks rather than months. This helps with cost optimization and qualitative workflow management.

  2. Flexibility

Flexibility refers to adaptability to changing business needs. The important points to consider are data exploration, visibility and usability of data. In Agile analytics, projects are developed in short iterations so that, at the end of each iteration, the result achieved can be shown and the user can see a working version of the software before moving to the next iteration; this way, the overall project is much more flexible. The methodology is flexible in terms of the time, scope and quantity of project work.

  3. Responsiveness

Responsiveness is a call-to-action process. Here, one identifies a new business problem through predictive modeling of the available data. It gives a preview of certain business risks that may arise while working with particular technologies, tools and software. Agile analytics methodology is a modern self-service model for handling the entire IT landscape. Therefore, IT can be more responsive to the needs of the business and more proactive in supporting and scaling the necessary infrastructure.

Agile Analytics Use Cases

Global Traceability Solution

A common problem faced by global pharmaceutical manufacturers is the challenge in managing and restoring fragmented data. The supply chain involves a large number of stakeholders and a complex process from manufacturing to shipping to the end user. To overcome this hurdle and increase productivity, the latest ERP systems could be adopted in the process.

  • Real-time material mapping, batch genealogy and chain-of-custody information would provide an overview of the end-to-end material flow from purchase order to plant to distribution to shipment.
  • Context-specific visualization and drill-down ability could assist in converting complex processes into a simpler format for better workflow management.
  • Google-like search over all product and batch elements enables real-time tracking of products, systems and locations. This would assist in identifying repeated issues in products.

The business outcomes of these processes are increased transparency, more effective cost management and lower time to insight.

Inventory Management Solutions

 

In any manufacturing industry, inventories play a crucial role. Managing inventories involves complex issues like maintaining stocks in terms of quantity and quality, data integrity and end-to-end visibility of inventories. To overcome this hurdle and get better output, manufacturers can adopt end-to-end processes.

  • Value Stream Mapping is a lean management method for analyzing the current state and designing a future state for the series of events that take a product/service from the beginning through to the customer with reduced lean wastes. This helps to identify potential failure points, systems and data needs.
  • Data Modeling is a detailed visual representation of your databases, with contexts for different stakeholders who have different perspectives on working with data. It is where business and data align. This assists in identifying correlations.
  • Root Cause Analysis is an approach for identifying the underlying causes of an incident so that the most effective solution can be identified and implemented.
  • Predictive Modeling is a process that uses data mining and probability to forecast outcomes (see the sketch after this list).
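To make the last bullet concrete, below is a minimal predictive-modeling sketch: a simple scikit-learn logistic-regression model that forecasts stock-out risk from a few inventory features. The features and data are made up purely for illustration and are not from any real engagement.

```python
# Illustrative predictive-modeling sketch: forecast stock-out risk from
# simple inventory features. Data and feature names are made up.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: [days_of_cover, avg_daily_demand, supplier_lead_time_days]
X = [
    [12, 40, 7], [3, 55, 14], [25, 20, 5], [2, 70, 21],
    [18, 35, 6], [5, 60, 10], [30, 15, 4], [1, 80, 30],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = a stock-out occurred

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probability of a stock-out for an item with 4 days of cover,
# demand of 65 units/day and a 12-day supplier lead time.
print(model.predict_proba([[4, 65, 12]])[0][1])
```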

These approaches help manufacturers reduce failure of inventories, automate manual processes and increase accessibility to multiple data sources.

Traits like early delivery of production quality, delivering the highest-valued features first, tackling risks early and continuous stakeholder and developer interaction determine true agility. By continuously seeking and adapting to feedback from the business community, Agile analytics teams evolve toward the best system design. With Agile analytics, companies can balance the right amount of structure and formality, with a constant focus on building the right solution.

About the Author: 

Sampriti is Marketing Executive at Relevance Lab.



When a digital product is developed in and for a particular geography and market, the enterprise and the developer/architect function are focused on getting it to market first. Once it matures and gains greater engagement, the enterprise looks to continuously evolve a stable product that is feature-rich, scalable and robust. So, when the opportunity to take the product global arises, the design and development of the product encounter a whole set of challenges that were not accounted for in the initial stages of development. This is when the localization-versus-internationalization challenges take root.


We have compiled a few quick hacks that you can use as your checklist for a smoother transition.


Let’s first define internationalization and localization. As per the World Wide Web Consortium (W3C), localization is defined as “the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a “locale”).” Internationalization is defined as “the design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language.”


So, what are the typical factors that teams of designers, architects and developers need to first establish, foreseeing internationalization?



Legal and Regulatory Guidelines

Different regions, geographies, countries and markets have different regulatory guidelines that can extend to several facets, including organizational practices, trademark requirements, currency repatriation, tax compliance, regulatory compliance, duties, and corporate agreements and contracts. If you’re working on a product that needs to go to market in China, for instance, it is important to understand the legal and regulatory framework within which you can operate.


Hosting Location, Global Distribution/Content Delivery Network Infrastructure

Different geographies, different networks and different internet speeds are factors one must consider when taking your product global. How can a product be designed to ensure that it “loads” quickly for customers across regions? Should there be a central database or should it be distributed? Who, where and how will distribution of data be managed? These are critical questions to ensure there are no customer drop-offs.


Unicode Support

Today, Unicode supports most of the world’s writing systems. Enabling its proper usage to support local, regional or even cultural context is critical. For instance, supporting special characters from different languages that fall outside the ASCII boundaries (ASCII defines the 128 characters of a typical English keyboard) is extremely important. Unicode defines 2^21 code points (covering characters from all recognized languages in the world); thus, the UTF-8/16 encoding formats should be supported when you’re planning to release your product in different geos. This is a crucial requirement for the database, DB connection, build setup and server startup options—just to name a few.
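As a tiny illustration of the point, the snippet below shows non-ASCII text surviving a UTF-8 encode/decode round trip, and what happens when an ASCII assumption sneaks in. The example strings are arbitrary; the principle is what matters.

```python
# Sketch: why UTF-8 matters -- non-ASCII text breaks under an ASCII assumption.
greetings = {"ja": "こんにちは", "hi": "नमस्ते", "de": "Größe"}

for lang, text in greetings.items():
    encoded = text.encode("utf-8")          # safe: UTF-8 covers all of Unicode
    assert encoded.decode("utf-8") == text  # the round trip is lossless
    try:
        text.encode("ascii")                # fails: these characters fall outside ASCII's 128
    except UnicodeEncodeError:
        print(f"{lang}: outside ASCII, {len(encoded)} bytes in UTF-8")
```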


If you’re fond of hard-coding text, think again. Creating text and media that is easily editable gives you the flexibility to easily localize and adapt your product for different regions.


Multi-Lingual Support

English may be one of the most spoken languages in the world, but that doesn’t necessarily mean every region or market thinks, reads and speaks English. For instance, if you’re releasing a product in Japan, China or even India (with its plethora of scripts for different languages), multi-lingual support for your product is essential. Here are a few things to follow:


  • Maintain language-specific resource files for each supported language (e.g., en-us.properties, ar-sa.properties) with key-value pairs—the key is used in the code to place the text, and the value is the language-specific translated text (a small sketch of this follows the list).
  • All text on the screen, messages and label text must be sourced from resource files, and there should be no referencing of text directly in the code.
  • Choose whether to keep the resource files on the back end or the front end. The front end is recommended, as it improves performance by avoiding calls to the back end, and language toggling is faster.
  • UI framework should support translation features.
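Below is a minimal sketch of the resource-file approach from the first bullet above: key-value pairs are loaded from a locale-specific properties file, with a fallback to English when a locale or key is missing. The file names follow the en-us.properties / ar-sa.properties convention mentioned above; the keys and contents are illustrative assumptions.

```python
# Sketch: look up UI text from locale-specific resource files
# (e.g. en-us.properties, ar-sa.properties) with an English fallback.
def load_resources(locale):
    """Parse simple key=value lines from a .properties-style file."""
    resources = {}
    with open(f"{locale}.properties", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                resources[key.strip()] = value.strip()
    return resources

def translate(key, locale, default_locale="en-us"):
    try:
        resources = load_resources(locale)
    except FileNotFoundError:
        resources = load_resources(default_locale)
    # Fall back to the default locale's text if the key is missing.
    return resources.get(key) or load_resources(default_locale).get(key, key)

if __name__ == "__main__":
    print(translate("login.button.label", "ar-sa"))
```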

Think about text expansion in different languages, in terms of the number of characters, and how it can affect the UI and UX of your product. The same holds good for languages that are written from right to left, as well as for translations.


Scalable Framework for Geo-Specific Customization

Development framework must consider the UI/UX, branding, orientation and size of the product when going i18n. For instance, the color red could mean different things in different countries and cultures—in China it could mean endurance, in India it could mean purity, in Europe it could mean self-sacrifice or in South Africa it could mean grief and sorrow. Therefore, it is extremely crucial to understand different nuances of cultural significance while designing the UI for a great UX. A few other factors to keep in mind:


  • A tag-based framework for content: content is tagged with a language/country, and gets picked up for users whose profiles match the tag values. This way, you may have content for various languages, but a user in Spain sees only the Spanish content just by setting their language preference in your system (a small sketch of this idea follows the list).
  • Orientation and sizing adjustments—specific CSS to handle alignment/size specific customization.
  • Navigation direction (left-to-right or right-to-left), and enabling/disabling certain options through CSS customization.
  • Localization-based content.
  • Language-based content.
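As referenced in the first bullet above, here is a minimal sketch of tag-based content selection: each content item carries language/country tags, and a user's profile selects only the matching items. The field names and sample data are illustrative assumptions, not a specific framework.

```python
# Sketch: tag-based content selection -- a user sees only content whose
# language/country tags match their profile. Field names are illustrative.
def content_for_user(content_items, user_profile):
    selected = []
    for item in content_items:
        tags = item["tags"]
        language_matches = tags.get("language") == user_profile["language"]
        country_matches = tags.get("country") in (None, user_profile["country"])
        if language_matches and country_matches:
            selected.append(item)
    return selected

content = [
    {"id": 1, "title": "Oferta de verano", "tags": {"language": "es", "country": "ES"}},
    {"id": 2, "title": "Summer offer", "tags": {"language": "en", "country": None}},
]
user = {"language": "es", "country": "ES"}
print(content_for_user(content, user))  # only the Spanish item is selected
```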

Developing a framework for rapid transition to international markets requires a thorough think-through from a product enablement perspective, keeping in mind operational efficiency without impacting product behavior. Using this checklist will help you save time, money and rework when you finally decide to go i18n.


About the Author: 


Ruchi is Director – Solution Architect at Relevance Lab. She has around 20 years of experience leading the execution of projects for various customers in technical leadership roles. She has been involved in designing, implementing and managing enterprise-grade, highly scalable i18n solutions catering to multiple geographies with diverse needs.


(This blog was originally published on DevOps.com and can be read here: https://devops.com/is-your-digital-product-ready-for-international-prime-time/)


Source: Devops.com



At last year’s Google I/O conference, when Sundar Pichai demoed an AI assistant that can schedule appointments, make calls on our behalf, book tables at a restaurant and more, much of what we had imagined about AI became reality. It felt as though Pichai was talking to Aladdin’s Genie, one that fulfils the day-to-day mundane operational tasks that can be made simpler with the help of Artificial Intelligence. Similarly, the mission of AIOps is to make the job of IT Operations simpler and more efficient.


According to Gartner, AIOps—the “Artificial Intelligence” for IT Operations—is already making waves in the way IT Ops teams work. One important digital-technology use case is how AIOps is becoming pervasive in the IT world and how it is transforming traditional IT management techniques. While digital transformation drives Cloud adoption for enterprises, there is an increasing need to manage the “Day 2 Cloud scenario” more efficiently in order to realize the true benefits of cloud transformation. AIOps helps IT Operations teams automate and enhance their operations using analytics and machine learning. This enables them to analyze data collected from various tools and devices and to predict and resolve IT issues in real time, ensuring that business services are always available to business users. This is important for any organization that operates in a “service-driven” environment.


Here are the various components of AIOps:


1)  Data Ingestion: This is a core capability of any AIOps tool. These tools process data from disparate and heterogeneous sources. The principle of AIOps is based on machine learning and data crunching, so it is important to ingest the various datasets that determine the key success parameters for IT operations. These include data collected from performance monitoring tools, service desk ticket data, individual system metrics, network data, etc. Due to the voluminous and exponentially increasing nature of this data, it is very difficult to track all these datasets manually and determine their impact on day-to-day IT Operations.


2)  Forming the Model/Anomaly Detection: Once the data ingestion layer is created, the next important aspect of an AIOps system is the ability to form a model of what is normal. Once the system has this model, an anomaly detection capability can be built on top of it. Any parameter that deviates from normal can be flagged as an anomaly that could lead to outages, hampering the availability of business services. Machine learning can be applied to anomaly detection; however, it is best applied to specific use cases where patterns and actions are repeatable. This is the step where self-learning capabilities are injected into the system.
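As a simplified illustration of this step, the sketch below learns a baseline (mean and standard deviation) for one metric and flags points that deviate far from it. Real AIOps tools use far richer models and correlate many signals; the metric, threshold and data here are assumptions made only to show the idea.

```python
# Simplified anomaly detection: learn a baseline from historical metric
# values and flag points that deviate more than `z_threshold` sigmas.
from statistics import mean, stdev

def find_anomalies(history, recent, z_threshold=3.0):
    baseline_mean = mean(history)
    baseline_std = stdev(history) or 1e-9  # avoid division by zero on flat data
    anomalies = []
    for timestamp, value in recent:
        z = abs(value - baseline_mean) / baseline_std
        if z > z_threshold:
            anomalies.append((timestamp, value, round(z, 1)))
    return anomalies

if __name__ == "__main__":
    history = [110, 120, 115, 118, 112, 119, 121, 117]  # e.g. DB queue size per minute
    recent = [("10:01", 118), ("10:02", 450), ("10:03", 122)]
    print(find_anomalies(history, recent))  # flags the spike at 10:02
```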


3)  Prediction of Outages: Once the system starts determining what is normal and what is an anomaly, it becomes easier to predict outages, performance degradation or any other condition that affects the overall business, based on the model. For instance, an increase in database queue sizes could lead to an increase in transaction times for online payments, which could in turn lead to abandoned items in shopping carts. The AIOps tool should be able to predict such a pattern and flag it.


4)  Actionable Insights: The system should be able to look at past data of actions taken and provide recommendations on the possible actions that can prevent downtime of business services. Past actions could be tracked via the ITSM tickets created for past incidents or through knowledge-base articles that have been associated with those tickets.
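A very reduced sketch of this idea is shown below: match a new anomaly description against past incident tickets by keyword overlap and surface the knowledge-base articles attached to the closest matches. Real systems use proper similarity models and richer ticket data; the structure and examples here are assumptions.

```python
# Reduced sketch: recommend actions for a new anomaly by keyword overlap
# with past incident tickets and their attached KB articles.
def recommend_actions(anomaly_description, past_tickets, top_n=2):
    anomaly_words = set(anomaly_description.lower().split())
    scored = []
    for ticket in past_tickets:
        overlap = len(anomaly_words & set(ticket["description"].lower().split()))
        if overlap:
            scored.append((overlap, ticket["kb_article"]))
    # Highest keyword overlap first.
    return [kb for _, kb in sorted(scored, reverse=True)[:top_n]]

past_tickets = [
    {"description": "database queue size growing, slow transactions", "kb_article": "KB0012: Purge DB queue"},
    {"description": "disk nearly full on app node", "kb_article": "KB0034: Extend volume"},
]
print(recommend_actions("slow payment transactions and database queue backlog", past_tickets))
```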



One of the important use cases for AIOps that we have been implementing for our enterprise clients is storage management. In a typical production environment, the IT Operations team gets an alert when a disk is close to full capacity, and as a result, responses from the affected node become slower. Through intelligent monitoring and correlation analysis, the exact cause can be determined, storage capacity can be adjusted automatically by proactively adding new volumes, and the node can be restored to its normal level of functioning.
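A very reduced sketch of the trigger side of this use case: check disk utilization and call a remediation hook when it crosses a threshold. The add_volume function below is a placeholder for whatever provisioning automation (a BOT, a cloud API call, etc.) is actually in place; the threshold is an assumption.

```python
# Reduced sketch of the storage use case: detect near-full disks and
# trigger a remediation hook. `add_volume` is a placeholder for real automation.
import shutil

def add_volume(mount_point):
    # Placeholder: in a real setup this would call a BOT or a cloud API
    # to attach and extend storage for the given mount point.
    print(f"[remediation] provisioning extra capacity for {mount_point}")

def check_disks(mount_points, threshold=0.85):
    for mount_point in mount_points:
        usage = shutil.disk_usage(mount_point)
        utilization = usage.used / usage.total
        if utilization >= threshold:
            add_volume(mount_point)

if __name__ == "__main__":
    check_disks(["/"])
```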


There are other use cases of AIOps in capacity management, resource utilization, etc., which could make the life of the IT Ops team much simpler. The day is not far when a CIO takes on the avatar of Aladdin and the Genie shows up in the form of an AIOps tool.


About the Author:


Sundeep Mallya is the Vice President and Head of Engineering for RL Catalyst Product at Relevance Lab.
