The pros and cons of tower, rack, and blade servers
    There are three main choices when it comes to buying a new server: tower, rack, or blade. Here are some of the pros and cons of each kind of server, along with some of my experiences with each one.
    Tower servers
    Tower servers seem dated and look more like desktops than servers, but these servers can pack a punch. In general, if you have a lot of servers, you're probably not using a bunch of tower servers, because they can take up a lot of space and are tough to physically manage since you can't easily stack them on one another. In some cases, as organizations grow and move to rack servers, conversion kits can be purchased to turn a tower server into a rack-mount server.
    As implied, tower servers are probably found more often in smaller environments than anywhere else, although you might find them in point solutions in larger places.
    Tower servers are generally at the lower end price-wise, although they can be expanded considerably, and a fully expanded tower can become quite expensive.
    Tower servers take up a lot of space and require either individual monitors, keyboards, and mice or a keyboard, video, mouse (KVM) switch that allows them to be managed with a single set of equipment. In addition, cabling can be a headache, especially if you have a lot of network adapters and other I/O needs; you'll end up with cables everywhere.
    Rack servers
    If you run a data center of any reasonable size, you've probably used a lot of industry-standard 19"-wide rack servers. Sized in rack units (a single U is 1.75" of vertical space), rack servers range from 1U "pizza boxes" to 5U, 8U, and more. In general, the bigger the server, the more expansion opportunities are available.
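    To make the sizing concrete, here is a minimal sketch of the rack-unit arithmetic described above. The constant and function names are my own illustration; the only fact taken from the article is that 1U equals 1.75 inches of vertical space.

```python
# Rack-unit sizing: 1U = 1.75 inches of vertical space.
# Larger servers (2U, 5U, 8U, ...) are just multiples of that unit.
RACK_UNIT_INCHES = 1.75

def rack_height_inches(units: int) -> float:
    """Return the vertical space, in inches, occupied by a server of the given U size."""
    return units * RACK_UNIT_INCHES

for u in (1, 2, 5, 8):
    print(f"{u}U server: {rack_height_inches(u)} inches tall")
```

    This also shows why a standard 42U rack is roughly six feet of usable vertical space (42 x 1.75" = 73.5").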
    Rack servers are extremely common and make their home inside these racks along with other critical data center equipment such as backup batteries, switches, and storage arrays. Rack servers make it easy to keep things neat and orderly since most racks include cable management of some kind. However, rack servers don't really simplify the cabling morass, since you still need a lot of cabling to make everything work -- it's just neater. I once worked in a data center in which I had to deploy 42 2U Dell servers into three racks. Each server had to have dual power cables; keyboard, video, and mouse cables; and six (yes, six) network cables (six colors, with each color denoting a specific network). It was a tough task to keep the cabling under control, to put it mildly, but because everything was racked, the built-in cable management made the job easier.
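    A quick tally shows why that deployment was such a chore. The per-server cable counts (2 power, 3 KVM, 6 network) come from the story above; the helper function itself is just illustrative arithmetic.

```python
# Count total cables for a rack deployment.
# Defaults reflect the deployment described above: dual power cables,
# three KVM cables (keyboard/video/mouse), and six network cables per server.
def cables_total(servers: int, power: int = 2, kvm: int = 3, network: int = 6) -> int:
    """Return the total number of cables for the given number of servers."""
    return servers * (power + kvm + network)

print(cables_total(42))  # 42 servers x 11 cables each = 462 cables
```

    Spread across three racks, that is still more than 150 cables per rack to route and label.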
    Like tower servers, rack servers often need KVM capability in order to be managed, although some organizations simply push a monitor cart around and connect to video and USB ports on the front of the server so that they don't need to worry about KVM.
    Rack servers are very expandable; some include 12 or more disks right in the chassis and support for four or more processors, each with multiple cores. In addition, many rack servers support large amounts of RAM, so these devices can be computing powerhouses.
    Blade servers
    There was a day when buying blade servers meant trading expansion possibilities for compactness. Although this is still true to some extent, today's blade servers pack quite a wallop. Blade servers do still face expansion challenges compared to the tower and rack-based options; most tower servers, for example, have significant expansion options when it comes to PCI/PCI Express slots and additional disk drives. Many blade servers are limited to two to four internal hard drives, although organizations that use blade servers are likely to have shared storage of some kind backing the blade system.
    Further, when it comes to I/O expansion options, blade servers are a bit limited by their lack of expansion slots. Some blade servers boast PCI or PCI Express expansion slots, but for most blade servers, expansion is achieved through the use of specially designed expansion cards. In my case, the Dell M600 and M610 blades have three mezzanines. The first mezzanine consists of dual Gigabit Ethernet adapters. The remaining mezzanines are populated based on organizational need. In our configuration, the blades have a second set of Gigabit Ethernet adapters housed in the second mezzanine and Fibre Channel adapters in the third. If necessary, I could also choose to use mezzanine cards with four ports in some configurations. So, although the blade server doesn't have quite the I/O selection of other server form factors, it's no slouch, either.
    When raw computing power and server density are the key drivers, blade servers meet the need. For example, in my environment, I have a 10U Dell M1000e blade chassis that can support up to 16 servers, so each server uses the equivalent of 0.625U of rack space. On top of that, the blade chassis holds four Gigabit Ethernet switches and two Fibre Channel switches, providing additional rack space savings since I don't need to rack-mount those devices to support different connectivity options. In addition, the blade chassis has a built-in KVM switch, so I don't need to buy a third-party unit and cable it up.
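    The density math above can be sketched in a couple of lines. The chassis size (10U) and blade count (16) are from the M1000e example; the function name is my own.

```python
# Blade density: divide the chassis's rack footprint by the number of
# blades it holds to get the effective rack space per server.
def rack_units_per_server(chassis_units: float, max_blades: int) -> float:
    """Return the equivalent rack units consumed by each blade in a full chassis."""
    return chassis_units / max_blades

print(rack_units_per_server(10, 16))  # 10U chassis / 16 blades = 0.625U per server
```

    Compare that 0.625U figure with a 1U rack server -- a full blade chassis fits substantially more servers into the same rack, before even counting the switches it absorbs.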
    Speaking of cabling, a blade environment generally has much less of it than tower or rack environments since a lot of the connectivity is handled internally. You'll end up with a neater server room as a result.
    Another advantage is that adding a new server consists of simply sliding it into an available slot in the chassis; there is no need to rack a new server and deal with a bunch of new cabling. The flip side of this density is that heat dissipation becomes a challenge: blade chassis can put out a lot of heat.
    From a cost perspective, blade servers require some initial infrastructure, such as the chassis, so the upfront cost is often higher than for servers of other types.
    Bottom line
    If you need one or two servers, a tower solution probably makes sense. If you need three to 24 servers or massive scalability, then rack servers are for you. When you need more than 24 servers, I advise you to consider a blade solution to meet your data center needs.