Choosing the right monitoring tool for your infrastructure is vital to a company’s success. A lack of effective monitoring can increase downtime, harm revenue, hinder SLA performance and ultimately damage a company’s reputation.
With this in mind, CloudTweaks looks at three essential features for any monitoring tool…
1. Data Aggregation
With the rapidly increasing use of server auto-deployment, businesses are facing a situation where they may end up with hundreds of thousands of servers all networked together to accomplish a task. If you are running such a large network of servers, the metrics of individual servers become less important; it is both more significant and more interesting to see statistics across the whole network.
A problem arises when trying to use traditional monitoring software to understand the output of a large number of servers. If developers or system administrators look at the same metric for all of the servers, they’ll be presented with a confusing and unusable chart that gives very little insight into how the aggregated group is handling a task.
Datadog uses a metrics aggregation server to unify many data points into a single metric for a given interval of time, which can then be presented on charts, graphs and dashboards. It does this by accepting custom application metric points over UDP and forwarding the information to the Datadog software. The developers specifically chose UDP because it is a fire-and-forget protocol: an application won’t stop running while it waits for a response, which is important if the monitoring server is inaccessible.
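To make the fire-and-forget idea concrete, here is a minimal Python sketch of sending a custom metric as a UDP datagram. The payload layout ("name:value|type|#tags") follows the StatsD/DogStatsD wire convention; the metric name, tags and port are illustrative assumptions, not a definitive client implementation.

```python
import socket

def send_metric(name, value, metric_type="g", tags=None,
                host="127.0.0.1", port=8125):
    """Emit one StatsD-style metric datagram (fire-and-forget).

    The "name:value|type|#tag,..." layout follows the DogStatsD wire
    convention; names, tags and the port here are illustrative.
    """
    payload = f"{name}:{value}|{metric_type}"
    if tags:
        payload += "|#" + ",".join(tags)
    # UDP: sendto() returns immediately whether or not a collector is
    # listening, so the application never blocks on its monitoring.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), (host, port))
    return payload

# Hypothetical metric: report a gauge with a couple of tags.
send_metric("web.request.latency", 42, tags=["env:prod", "role:frontend"])
```

Because the send never waits for an acknowledgement, instrumented application code pays almost nothing for emitting metrics, even when the collector is down.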
2. Choosing Which Servers to Aggregate
One of the most significant challenges of data aggregation is understanding and deciding which servers’ data to aggregate together. Without grouping the right servers together, it becomes impossible to correctly dissect and interpret the outputs. An effective data monitoring tool will allow its users to import information from a diverse range of sources to help make those decisions.
Datadog’s software enables its customers to inherit tags from a broad range of systems, from online data centres such as Amazon Web Services to configuration management systems like Chef or Puppet. It will also allow a user to set up custom tags that use a multi-faceted tag-matching system to help them determine which servers should be reported in which aggregated chart.
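The idea behind tag-matching can be sketched in a few lines of Python. The inventory, tag names and helper function below are hypothetical, meant only to show how tags inherited from sources like AWS, Chef or Puppet let you decide which servers feed a given aggregated chart.

```python
# Hypothetical host inventory: in practice these tags might be
# inherited from AWS, Chef or Puppet, or set by hand as custom tags.
hosts = {
    "web-01": {"role:frontend", "env:prod", "region:us-east-1"},
    "web-02": {"role:frontend", "env:staging", "region:us-east-1"},
    "db-01":  {"role:database", "env:prod", "region:us-east-1"},
}

def select_hosts(inventory, required_tags):
    """Return the hosts whose tag set contains every required tag."""
    required = set(required_tags)
    return sorted(h for h, tags in inventory.items() if required <= tags)

# Aggregate only production front-end servers into one chart series.
prod_frontends = select_hosts(hosts, {"role:frontend", "env:prod"})
```

Matching on the intersection of several tag facets (role, environment, region) is what lets one chart show "all production front-ends" without naming individual machines.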
3. Rank and Filter Performance Metrics
Despite the obvious importance of holistic data monitoring through effective aggregation, it is often necessary to observe individual aspects of an infrastructure’s performance. Problems occur, however, when trying to use over-complicated charts and graphs. A throughput graph broken down by process will show hundreds of individual plots and is nearly impossible to interpret, whereas a heat-map of individual time series is not useful for tracking the role of a single process.
Datadog offers a way to quickly identify and monitor the metrics that a user is interested in seeing with its ‘top()’ family of functions. The tool gives customers the ability to rank, filter and visualise performance metrics over a given time period, allowing users to easily see information such as the highest peak values, the largest sustained average values, or the highest most recent values for a dataset.
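A simplified stand-in for this kind of ranking can be written in a few lines of Python. The function and parameter names below are illustrative, not Datadog’s actual API; the sketch only demonstrates ranking time series by peak, sustained average, or most recent value and keeping the top n.

```python
def top(series, n, rank_by="max"):
    """Keep the n series that rank highest by the chosen statistic.

    Illustrative sketch of top-style ranking; not Datadog's actual API.
    """
    score = {
        "max": max,                          # highest peak value
        "mean": lambda v: sum(v) / len(v),   # largest sustained average
        "last": lambda v: v[-1],             # highest most recent value
    }[rank_by]
    ranked = sorted(series.items(), key=lambda kv: score(kv[1]), reverse=True)
    return dict(ranked[:n])

# Hypothetical per-process throughput samples.
throughput = {
    "proc-a": [5, 7, 6],
    "proc-b": [1, 9, 2],
    "proc-c": [3, 3, 3],
}
top(throughput, 2, rank_by="mean")  # keeps proc-a and proc-b
```

Cutting hundreds of series down to the two or three that matter is what turns an unreadable per-process throughput chart into something a human can act on.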
For more information on Datadog’s extensive features, head over to their website to register for a free trial.
By Daniel Price
Post Sponsored By Datadog