Mutiny Limited was founded at the very end of 2000 as a spin-off from Manchester University's "Manchester Computing" laboratory. The focus of the business is to develop and bring to market the software that comprises the "Mutiny Critical Services Monitoring Appliance" and associated products. The company spent its first year based at Manchester University before successfully securing venture-capital funding at the end of 2001. The offices moved to London and the company embarked on a three-year joint venture with Toshiba Systems, which led to Mutiny supplying the software to power the "Toshiba Network Monitor". Following Toshiba's major restructuring in 2004, Mutiny began to supply network-monitoring systems through the IT reseller channel and directly to end customers in the UK and Germany. The business model today revolves around four product offerings, including Solo Appliances and a "White Label" core-technology integration for other IT management platforms.
The heart of Mutiny is a dynamic polling engine that has been created in-house. Its function is to gather information from network-attached devices using a variety of protocols (such as ping, SNMP and direct IP), compare the results against expected values to determine a per-property status (OK, Warning, Critical), and then store the data to disk in an easily retrievable format for historical reference and graphing. The polling engine has been designed to collect all the data from any size of network in less than one minute, the only limitation being the hardware (or virtualised hardware) of the Mutiny Network-Management server. However, where there are bandwidth restrictions, or where SAN performance cannot keep up with the volume of data, Mutiny can also operate as a distributed array of Master/Slave servers.
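The per-property status classification described above can be sketched as follows. This is a minimal illustration only: the threshold names, values and function signature are assumptions, not Mutiny's actual configuration or API.

```python
# Illustrative sketch of per-property status classification against
# expected values; thresholds here are hypothetical examples.

OK, WARNING, CRITICAL = "OK", "Warning", "Critical"

def classify(value, warning_at, critical_at):
    """Compare a polled value against thresholds to give a per-property status."""
    if value >= critical_at:
        return CRITICAL
    if value >= warning_at:
        return WARNING
    return OK

# Example: a CPU-utilisation property polled via SNMP, thresholds 80%/95%.
print(classify(72, warning_at=80, critical_at=95))   # OK
print(classify(88, warning_at=80, critical_at=95))   # Warning
print(classify(97, warning_at=80, critical_at=95))   # Critical
```

In practice each monitored property would carry its own thresholds, with the resulting statuses written to disk alongside the raw values for graphing.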
The majority of the data that Mutiny collects is gathered via SNMP, and it is in this area that most of the optimisation work has taken place. By studying the real networks of our customer base, we have been able to ascertain the best methods for SNMP polling. The polling engine works by grouping the devices to be polled into blocks of around 9 to 10 systems and measuring the SNMP response time of each. The slowest responders are then moved to the front of the next polling cycle, while those that respond quickly are polled later. This gives slow responders a full 60 seconds to respond via SNMP before the next polling cycle commences. Using this method, the Mutiny system is able to poll much larger networks without increasing the overall time of the polling cycle. Also, by using a controlled multi-threaded Java architecture, the resources required increase substantially less than linearly.
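The slowest-first ordering described above can be sketched as a simple scheduling step between cycles. The function name, block size and data structures are illustrative assumptions, not Mutiny's implementation:

```python
# Hypothetical sketch of slowest-first SNMP scheduling: devices are
# ordered by their last measured response time (slowest first) and then
# grouped into polling blocks for the next cycle.

def schedule_next_cycle(devices, last_response_ms, block_size=10):
    """Order devices slowest-first, then split them into polling blocks."""
    ordered = sorted(devices, key=lambda d: last_response_ms[d], reverse=True)
    return [ordered[i:i + block_size] for i in range(0, len(ordered), block_size)]

times = {"switch-a": 40, "router-b": 900, "server-c": 15, "fw-d": 300}
blocks = schedule_next_cycle(list(times), times, block_size=2)
print(blocks)  # [['router-b', 'fw-d'], ['switch-a', 'server-c']]
```

Because the slow responders are dispatched at the start of the cycle, they have nearly the whole 60-second window in which to reply before their results are needed.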
Other experiments have shown that it is highly desirable to keep the polling time as close to 45 seconds as possible and to allow for slow responders by staggering the poll-start time. Even with these two factors combined, however, the time taken to store the data can push the overall polling cycle to an unacceptable length. The solution Mutiny uses is to write the data to disk while the polling engine is sending and receiving SNMP information via the network interface. Disk activity is low at this time, so by holding the data in memory and writing as much as possible to disk without extending the polling cycle beyond one minute, the remaining data can be stored before the start of the next polling cycle. We have also redesigned the format of the round-robin data files used to store historical data, in order to make access quicker and to allow us to cache the header information more easily.
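The deferred-write idea can be sketched as buffering results in memory and flushing them only while time remains in the cycle budget. The function, parameter names and half-second budget below are assumptions for illustration, not Mutiny's code:

```python
# Minimal sketch of deadline-bounded flushing: poll results are buffered
# in memory and written to disk only until the cycle deadline is reached;
# any remainder is written before the next cycle starts.

import time

def flush_within_budget(buffer, write_fn, deadline):
    """Write buffered records until the deadline; return how many were written."""
    written = 0
    while buffer and time.monotonic() < deadline:
        write_fn(buffer.pop(0))
        written += 1
    return written

records = [{"device": "sw1", "ifInOctets": 1234}, {"device": "sw2", "ifInOctets": 99}]
stored = []
n = flush_within_budget(records, stored.append, deadline=time.monotonic() + 0.5)
print(n)  # 2 — both records fit comfortably within the budget here
```

Scheduling these flushes to coincide with the network-bound phase of the poll keeps disk and network activity overlapped rather than serialised.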
Mutiny has been designed from the outset to be simple to use. The user interface is based on Web Standards and is comprehensive yet intuitive. The monitored estate can be divided into any number of customisable views that provide controlled access to the data in whatever format is required: device-based, map-based, rack-based, service-based, customer-based, table-based, event-based and so on.
When adding devices, a sophisticated automatic-discovery process identifies the type and classification of each device so that a correctly tailored monitoring menu can be generated automatically. No further configuration is required and monitoring starts immediately; all that is needed is SNMP access to the device.
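One common ingredient of SNMP-based discovery is classifying a device from its standard MIB-II `sysObjectID`, whose OID prefix identifies the vendor. The sketch below uses real IANA enterprise numbers (9 = Cisco, 11 = HP, 311 = Microsoft), but the classification table and function are illustrative assumptions, not Mutiny's actual discovery logic:

```python
# Hedged sketch of device classification from the SNMP sysObjectID
# (MIB-II, OID 1.3.6.1.2.1.1.2). The table below is a tiny illustrative
# subset; a real discovery engine would use far richer rules.

VENDOR_PREFIXES = {
    "1.3.6.1.4.1.9.":   "Cisco device",    # IANA enterprise 9
    "1.3.6.1.4.1.11.":  "HP device",       # IANA enterprise 11
    "1.3.6.1.4.1.311.": "Windows host",    # IANA enterprise 311 (Microsoft)
}

def classify_device(sys_object_id):
    """Map a sysObjectID string to a coarse device classification."""
    for prefix, kind in VENDOR_PREFIXES.items():
        if sys_object_id.startswith(prefix):
            return kind
    return "Generic SNMP device"

print(classify_device("1.3.6.1.4.1.9.1.516"))  # Cisco device
```

The classification would then select which tailored monitoring menu to generate for the device.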
Mutiny is also capable of detecting the Layer 2 and Layer 3 topology of a network or sub-network and automatically or manually creating a root-cause tree that can be used to quickly identify service failures and minimise Alerts.
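The way a root-cause tree minimises Alerts can be sketched as follows: when a device fails, alerts for devices reachable only through it are suppressed, so only the root cause is reported. The tree representation and function below are assumptions for illustration:

```python
# Minimal sketch of root-cause suppression on a topology tree. Each
# device maps to its parent on the path towards the monitoring server;
# a down device whose ancestors are all up is a root cause, while a
# down device behind a down ancestor is suppressed.

def root_causes(parent, down):
    """Return only the down devices whose path to the poller is otherwise up."""
    causes = set()
    for device in down:
        node = parent.get(device)
        # walk towards the monitoring server, looking for a down ancestor
        while node is not None and node not in down:
            node = parent.get(node)
        if node is None:          # no down ancestor: this is a root cause
            causes.add(device)
    return causes

# core-router feeds edge-switch, which feeds two servers
parent = {"edge-switch": "core-router", "srv1": "edge-switch", "srv2": "edge-switch"}
print(root_causes(parent, {"edge-switch", "srv1", "srv2"}))  # {'edge-switch'}
```

Here the failed switch is reported once rather than generating three separate Alerts.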
When identifying problems, Mutiny uses Transient-Event Suppression to minimise the number of false alarms and to ensure that Alerts are quickly sent to the operators and contacts who really need to see them.
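One common form of transient-event suppression is requiring a property to fail for several consecutive polls before an Alert fires, so a one-off glitch never reaches an operator. The class below is a sketch of that general technique, not Mutiny's implementation, and the threshold of three polls is an assumption:

```python
# Hedged sketch of transient-event suppression: an Alert fires only on
# the Nth consecutive failed poll, and fires exactly once per outage.

class TransientSuppressor:
    def __init__(self, required_failures=3):
        self.required = required_failures
        self.streak = 0

    def observe(self, ok):
        """Record one poll result; return True only when an Alert should fire."""
        self.streak = 0 if ok else self.streak + 1
        return self.streak == self.required

s = TransientSuppressor(required_failures=3)
print([s.observe(ok) for ok in [False, True, False, False, False, False]])
# [False, False, False, False, True, False]
```

The single failed poll at the start is absorbed silently; only the sustained run of failures produces an Alert, and the continuing outage does not re-alert on every cycle.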
Mutiny readily works as a Virtual Appliance under VMware. It is therefore very fast to deploy and to make resilient: simply clone the virtual machine each time a new instance is required.
Mutiny is licensed on a per-monitored-device basis, regardless of the number of monitored properties on that device. This method of licensing is much simpler for planning and resourcing than a per-property method.
Mutiny offers a variety of flexible licensing schemes. For managed-service providers, for example, Mutiny operates an audit-based system whereby the customer pays only for the Mutiny licences that are actually in use. This allows the provider to grow and scale their customer base dynamically without worrying about pre-purchasing licences that they may or may not require.