An important part of maintaining a robust IT environment for any business is monitoring and maintaining suitable capacity.
As time goes by and the needs of your business grow, added strain is placed on the IT infrastructure.
No matter how deep your pockets are, it is not always practical to build an infrastructure up front that is robust enough for all your future needs, so it is important to evaluate the IT infrastructure from time to time to ensure there is sufficient capacity for both immediate and future needs.
IT capacity for servers, PCs, devices, and the infrastructure as a whole falls under categories such as:
- Network/Internet bandwidth
- Hard disk space
- Memory size
- CPU speed/loading
- Power supply
- Number of support staff and availability
- Support staff training
Some of these items may not seem immediately obvious, such as ensuring there is sufficient disk space to hold unattended server OS updates, or that the CPU in the firewall can handle the additional volume of packets generated by a recently implemented IP phone system.
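Checks like these can be automated. The following is a minimal sketch using only the Python standard library; the threshold values are illustrative assumptions, not recommendations, and the `check_disk`/`check_cpu_load` names are hypothetical (it assumes a Unix-like host, where `os.getloadavg` is available).

```python
import os
import shutil

# Illustrative thresholds -- tune these to your own environment.
DISK_WARN_PCT = 80      # warn when a volume is more than 80% full
LOAD_WARN_RATIO = 0.75  # warn when 1-min load exceeds 75% of core count


def check_disk(path="/"):
    """Return (percent used, warning flag) for the volume holding `path`."""
    usage = shutil.disk_usage(path)
    pct_used = usage.used / usage.total * 100
    return pct_used, pct_used > DISK_WARN_PCT


def check_cpu_load():
    """Return (1-minute load average, warning flag), scaled to core count."""
    load1, _, _ = os.getloadavg()  # Unix-only
    cores = os.cpu_count() or 1
    return load1, load1 > cores * LOAD_WARN_RATIO


if __name__ == "__main__":
    pct, disk_warn = check_disk("/")
    print(f"Disk /: {pct:.1f}% used{'  <-- WARNING' if disk_warn else ''}")
    load, cpu_warn = check_cpu_load()
    print(f"Load (1m): {load:.2f}{'  <-- WARNING' if cpu_warn else ''}")
```

Run periodically (e.g. from cron) and fed into a log or alerting system, even a simple script like this catches a slowly filling disk long before it blocks unattended OS updates.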
What about the power usage of all the equipment?
As more equipment is added, did anyone calculate and check that there is sufficient power and cooling in the building to cope?
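A power budget needs nothing more than back-of-envelope arithmetic: sum the nameplate wattage of the equipment and compare it against the derated circuit capacity. The sketch below uses made-up example figures (the equipment list, the 230 V / 16 A circuit, and the 80% derating factor are all assumptions for illustration):

```python
# Hypothetical equipment list with nameplate power draw in watts.
equipment_watts = {
    "rack servers (4x)": 4 * 450,
    "core switch": 150,
    "firewall": 60,
    "UPS overhead": 100,
}

# Assumed 16 A circuit at 230 V, derated to 80% of rated capacity
# (a common conservative rule of thumb for continuous loads).
CIRCUIT_CAPACITY_W = 0.8 * 230 * 16

total = sum(equipment_watts.values())
headroom = CIRCUIT_CAPACITY_W - total
print(f"Total draw: {total} W of {CIRCUIT_CAPACITY_W:.0f} W capacity")
print(f"Headroom: {headroom:.0f} W")
```

The same arithmetic applies to cooling: convert the total draw to heat output and check it against the air-conditioning capacity of the room before new equipment is installed, not after.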
Many critically important functions of an IT implementation are often overlooked, and when neglected they almost certainly result in unexpected and catastrophic downtime. A sudden event such as loss of power can severely impact multiple areas of a business simultaneously, and often strikes "out of the blue".
So rather than sitting on a time bomb, insist your IT team provides a periodic Capacity Planning report.
In June 2015, a prominent shared data center in New Zealand operated by a mobile phone provider experienced a main power supply disruption. At the start of business one Monday morning, a drop in voltage (brownout) caused a main overcurrent breaker to trip. Such an event should ordinarily trigger the backup generators to auto-start, but due to a lack of foresight in the design of the system, the generators did not start.
The data center staff on-site had not been trained to start the generators manually, so by the time the electrical engineers arrived 30 minutes later to start them, many critical customer servers were offline; only customers with a backup at another location were spared.
It took another hour to restore power fully, during which time many businesses were unable to function, some crippled for the entire day awaiting manual intervention. In one case a medical-related company was affected.
This was entirely due to a lack of ongoing capacity planning, and it was eventually resolved by adding an additional supply cable and upgrading the overcurrent protection device.