APM Digest recently posted an interesting article discussing the differences between agent-less and agent-based monitoring. I personally can’t count the number of times we’ve heard customers tell us that agents were out of the question. And I can’t blame them: agents have garnered a bad reputation over the years, and the appeal of monitoring without the overhead of an agent is compelling. Unfortunately, there are many scenarios where agent-less monitoring simply cannot access the data needed to achieve good visibility. That’s why we advocate a mixed approach to monitoring that combines our agent-less and agent-based data collection techniques. It’s also why FireScope has taken a different approach to agents, one that alleviates the many issues that have given agents a bad name.
Having used monitoring suites from HP, BMC and others at previous jobs, I can sympathize with customers who loathe agents. First, agents mean more work for administrators; they are one more thing that has to be deployed, configured, updated and tweaked. In the case of a couple of the big vendor suites, multiple agents have to be installed on the same systems due to a lack of integration between products in the monitoring suite. Then there’s the resource overhead of the agents themselves, resources that must be diverted from business-critical applications. In fact, I can remember quite a few instances where agents had to be restarted or killed because they were actually interfering with the applications they were supposed to be monitoring (in a few of those cases, the offending agent was ‘accidentally’ permanently disabled). So when I hear a customer balk at agents, a little part of me has to smile. But all too often, when we dig into their visibility requirements, we find that agents become an absolute necessity.
When we designed the agents for FireScope, many of us still had fresh scars from dealing with agents from many different vendors. We therefore tasked ourselves with ‘never being that guy’, and this led to a couple of key design decisions:
Recently, one of our customers did an analysis of their production FireScope implementation. As part of this analysis, they conducted a long-term study of agent performance across thousands of servers. In their words: “For detailed monitoring of servers, FireScope supplies an agent (for Windows, Solaris and Linux) that is a well behaved application, in that it has minimal impact on the actual host. Testing under Windows shows less than 1% impact on the host. Memory usage can range up to 100 Meg, but this depends on the number and type of attributes collected and monitored by FireScope.” While this particular customer wasn’t initially enthusiastic about using agents for monitoring, over time they found that our ‘well behaved’ agents provided good value and visibility without the problems they had previously experienced.

They also found FireScope’s Agent Remote Command to be a particularly useful feature. As events are identified on a host, the Remote Command capability can execute commands to potentially remediate the issue. In one scenario, they had a .NET web application that suffered from a memory leak that periodically ground IIS to a halt. While their development teams worked to trace the source of the leak, their operations team set up FireScope to automatically execute IISReset whenever the application exceeded memory thresholds, effectively automating resolution of the issue.
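The pattern behind that Remote Command scenario, comparing a sampled metric against a threshold and firing a corrective command when it is exceeded, can be sketched in a few lines. The following Python fragment is purely illustrative: the threshold value and function names are invented for this example, and FireScope itself configures this behavior through its own interface rather than through code like this.

```python
import subprocess

# Hypothetical memory threshold (in MB) for the leaking application.
# The real threshold would be whatever operations deems safe for the host.
MEMORY_THRESHOLD_MB = 1024


def should_remediate(app_memory_mb, threshold_mb=MEMORY_THRESHOLD_MB):
    """Return True when the application's memory use exceeds the threshold."""
    return app_memory_mb > threshold_mb


def remediate(dry_run=True):
    """Recycle IIS via IISReset; dry_run avoids side effects in this sketch."""
    cmd = ["iisreset", "/restart"]
    if dry_run:
        return "would run: " + " ".join(cmd)
    # Only meaningful on a Windows host running IIS.
    subprocess.run(cmd, check=True)
    return "ran: " + " ".join(cmd)


# Example: a polled sample over the threshold triggers the remediation step.
if should_remediate(app_memory_mb=1500):
    print(remediate(dry_run=True))
```

The point is simply that an event condition is paired with a corrective action; in FireScope the condition is an event definition and the action is the Remote Command, so no custom scripting like the above is required.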
As the APM Digest article mentions, every organization has unique needs when it comes to monitoring. The right mix may or may not require agents, but don’t let fear of rogue or resource-hungry agents prevent you from getting the visibility you need.