A few years ago, a new project dropped into my lap at work: the setup and configuration of a “next-generation” Linux server for our Genetics department to run their sequencing analyses. The next-gen server was to replace an aging 1U Dell server with a pitiful single P4 processor and 4 GB of RAM (the poor thing ran maxed out pretty much 24×7). So far, so good.
And then the IT manager got involved and decided that the money for the next-gen server should be invested in a decent VMware environment, with the Genetics department getting a virtual Linux server to use instead. And this is where the fun begins. To begin with, I had been getting quotes for a server with between 48 and 128 GB of RAM. Our new virtual system has 32 GB, total. Given that the Genetics server will be one of many virtual hosts running on this system, 32 GB is likely to be insufficient. This is not the biggest issue, however.
Our IT manager then decided to pay for consultants to come out and set up the system. This seemed logical; while we have some experience with ESXi, having run a small ESXi environment for a year or so, we don’t have experience with complex VMware setups involving multiple blades and SANs.
Unfortunately, the IT manager also ordered two new switches that the SAN and blade chassis would connect to. Equally unfortunately, he didn’t consult his infrastructure engineer (that’d be me), and so we ended up with cheap switches that don’t support the features necessary for a fully fault-tolerant setup. The consultants weren’t too fazed by this. After a bit of head scratching, they set up a semi-working system and left.