Originally Posted by Slicker
The people who determine how much bandwidth an ISP needs are the same ones who decide how many bathroom stalls or urinals a sports stadium needs: they look at average utilization rather than peak.
At a previous employer (a publishing company whose name rhymes with the one the show "The Office" used), the director of systems administration decided we didn't need new servers because they were only running at 3%. From 9:30 to 4:00, Monday through Friday, they all ran at 100%; through the late afternoon, evening, night, early morning, and weekends they ran at under 1%, so the average came out to 3%.

So instead of buying more or faster servers, the director turned all the existing servers into virtual machines and got rid of two thirds of them, since they had such low utilization. They promptly became I/O and network bound because the VMs didn't get separate drives or NICs, but according to the stats they stayed at 3%, so it was a good decision.

To save even more money, they outsourced the data center, so we went from a gigabit switched environment to 350 people sharing a single T3 line (45 Mbps) for all LAN access (not WAN, LAN!!!). That works out to roughly 130 kbps per person. The T3 became the biggest bottleneck, so the need for faster servers became a moot point, proving that the 3% utilization figure was correct and that the outsourced data center had no impact on the business other than to save money.

It was so frustrating working there that I quit. I've been working for myself ever since, without regrets.
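For anyone who wants to see the averaging trap in numbers, here's a quick sketch. This is my own illustration, not the poster's actual monitoring data: the duty cycle is made up to match the shape of the story, and with these numbers the weekly average lands near 20% rather than 3%, but the point survives either way: the average tells you nothing about the peak the hardware has to serve.

[code]
# Build a week of hourly CPU samples: pegged during the roughly
# 9:30-4:00 weekday window, near-idle otherwise. (Illustrative
# numbers, not real monitoring data.)
samples = []
for day in range(7):                       # days 0-4 weekdays, 5-6 weekend
    for hour in range(24):
        busy = day < 5 and 9 <= hour < 16  # crude business-hours window
        samples.append(100.0 if busy else 0.5)

avg = sum(samples) / len(samples)
peak = max(samples)
print(f"average: {avg:.1f}%   peak: {peak:.0f}%")
# Sizing capacity off `avg` says "plenty of headroom";
# everyone working inside the busy window disagrees.

# Same trap after the outsourcing move: a 45 Mbps T3 shared by
# 350 people, versus the switched gigabit port everyone had before.
t3_mbps, users = 45, 350
print(f"per-user share of the T3: {t3_mbps / users * 1000:.0f} kbps")
[/code]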