Q9 Networks

Published January 3, 2017

Telemetry Enables Better Business Decisions

Article, Cloud Computing

In any business, having quality data and the ability to interpret it can mean better decision-making and improved results. Managing IT facilities and workloads is no different: the right data, properly interpreted, gives you a clearer understanding of the load on and growth of your IT estate, allowing better planning of IT investments.

From a planning and architecture perspective, understanding the impact and characteristics of IT workloads creates opportunities for consolidation and economies of scale. Operationally, better telemetry improves your ability to troubleshoot, refine and performance-tune your VMs and workloads, leading to improved IT and business functions that can reduce cost per transaction and streamline business processes. Telemetry and logging are not new to IT; however, the volume and detail of the data, and the insights we can draw from it, are greater than they have ever been. With cloud systems and artificial intelligence now being applied to telemetry data to yield deeper insights, the value this kind of data can provide to businesses is only increasing.

Placing Value on Telemetry
The most basic and impactful way telemetry helps is by giving you the ability to manage risk proactively and to operate predictively, both across your facilities as a whole and for individual applications or workloads. Rich data on the performance and load of your IT facilities, covering hardware power status, temperature and environmental warnings, and IT system status (e.g. CPU load, RAM occupancy, network saturation and performance, storage saturation and performance, and any alarms or failures on these elements), lets you move to a just-in-time model of IT growth. Rather than spending capital and effort on growing your infrastructure based on assumptions, you can base your future needs on real-world data.
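
To make that concrete, here is a minimal sketch of a host-level telemetry snapshot covering the metrics named above (CPU load, RAM occupancy, network counters, storage saturation). It assumes the third-party psutil library; the article names no specific tooling, so treat this as one illustrative option rather than a prescribed stack.

```python
import json
import time

import psutil  # assumed dependency; the article does not name any tooling


def collect_snapshot():
    """Return one point-in-time reading of the host metrics discussed above."""
    net = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),   # CPU load
        "ram_percent": psutil.virtual_memory().percent,  # RAM occupancy
        "disk_percent": psutil.disk_usage("/").percent,  # storage saturation
        "net_bytes_sent": net.bytes_sent,                # network throughput
        "net_bytes_recv": net.bytes_recv,                # (cumulative counters)
    }


if __name__ == "__main__":
    # A real collector would emit a snapshot like this on a fixed interval
    # and ship it to a time-series store for trending and alerting.
    print(json.dumps(collect_snapshot(), indent=2))
```

Sampled on a regular interval, snapshots like these become the real-world data that a just-in-time growth model depends on.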

Having quality data and analytics on individual workloads also means you can get quite predictive with your actual applications, not just with the hardware they run on. That means being able to avoid software or application failures, better tune the availability of key services and, generally, reduce or eliminate unwanted downtime and support issues.
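
As one sketch of what "getting predictive" can look like (an illustration of the idea, not a method from the article), the snippet below fits a least-squares trend line to historical disk-usage samples and estimates how many days remain before the volume fills, turning raw telemetry into a planning and failure-avoidance signal.

```python
from datetime import datetime, timedelta


def days_until_full(samples, capacity_pct=100.0):
    """Estimate days until a volume fills from (timestamp, used_pct) samples.

    Fits an ordinary least-squares line to usage history and extrapolates.
    Returns None if usage is flat or shrinking. Illustrative only: real
    capacity planning should also account for seasonality and bursts.
    """
    t0 = samples[0][0]
    xs = [(ts - t0).total_seconds() / 86400.0 for ts, _ in samples]  # days
    ys = [pct for _, pct in samples]

    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )  # growth in used_pct per day
    if slope <= 0:
        return None  # usage flat or shrinking: no projected fill date

    intercept = mean_y - slope * mean_x
    fill_day = (capacity_pct - intercept) / slope  # day index when 100% full
    return fill_day - xs[-1]  # days remaining from the most recent sample


# Example with made-up data: five daily samples trending up ~1.5 pct/day.
start = datetime(2017, 1, 3)
history = [(start + timedelta(days=d), 70.0 + 1.5 * d) for d in range(5)]
print(f"Days until full: {days_until_full(history):.1f}")  # -> 16.0
```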

Getting a Front Row Seat
Building a good view across both hardware resources and IT workloads, whether virtual or physical, requires planning an architecture whose hardware and facilities provide good telemetry. Vendor and technology choices determine the level of detail available: basic network data from an SNMP tool, status and health alerts from hardware, and information from operating systems about how applications and CPUs are driving load and resource consumption. Try to understand the capabilities you need and want up front. You will also need to tie this data into a tool or system that can make sense of all the different data points available to you, helping you interpret what they mean and what actions you can drive from the intelligence.
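
For example, the "basic network data from an SNMP tool" mentioned above might be gathered with a short poll like the sketch below. It assumes the third-party pysnmp library and uses a hypothetical device address and community string; any SNMP-capable toolchain would serve the same purpose.

```python
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)


def snmp_get(host, community, *oids):
    """Fetch one or more OIDs from an SNMPv2c agent; returns name -> value."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),  # mpModel=1 selects SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            *[ObjectType(ObjectIdentity(*oid)) for oid in oids],
        )
    )
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return {name.prettyPrint(): value.prettyPrint() for name, value in var_binds}


# Hypothetical device address and community string, for illustration only.
metrics = snmp_get(
    "192.0.2.10", "public",
    ("SNMPv2-MIB", "sysUpTime", 0),  # device uptime
    ("IF-MIB", "ifInOctets", 1),     # bytes received on interface index 1
    ("IF-MIB", "ifOutOctets", 1),    # bytes sent on interface index 1
)
print(metrics)
```

Polling counters like these on an interval, and storing the deltas, is what turns point-in-time status into the load and growth picture described earlier.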

A Look on the Inside
For a company to be successful in its data efforts, the task is best owned by the people who can realize the most value from good telemetry and analytics: the operations teams who have to keep your cloud systems running. Putting direct data and insight into the hands of the people actually making the changes is always best, and it is IT and operations staff who will get the most value out of this kind of insight. Beyond that, using this data to build better financial and procurement planning around IT is an immediate next step for most businesses, as the benefit can be significant.