Chopping your datacenter to size with new blades

Saar here, resident engineer at Myriad Supply. Today I’ll be discussing blade enclosures.

Let’s start with what some of you may already know: a chassis. In the networking world, you have a switch with ports in the front:

A switch also has power supplies and fans.

Let’s say each switch has:

  • 24-48 ports
  • 1 power supply
  • 1 fan (or more)
  • 1 management port
  • 1 console port

Now, if you buy 12 switches, you will have:

  • 12 switches
  • 12 power supplies (or 24 if you have redundant power)
  • 12 fans
  • 12 management ports
  • 12 console ports

A chassis offers you the option of a central fan, a central power supply, and centralized management.

This can save you:

  • 12 console ports which would have needed 12 console cables
  • 12 management ports which would have needed 12 management cables and a management switch

Instead of 12 (or 24) power supplies, you can have 2-8, depending on the chassis. Instead of 12 fans, you can have one or two central fan trays. All of this adds up to significant savings in several areas:

  • Power
  • Ease of maintenance
  • TCO (total cost of ownership)
  • Cabling
  • Airflow Design
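
To make the parts math concrete, here's a rough tally in Python. The standalone counts come from the example above; the chassis-side numbers are illustrative picks from the 2-8 power supply and 1-2 fan ranges, not figures for any specific model:

```python
# Parts to buy, cable, and maintain: 12 standalone switches
# versus one chassis with shared power, cooling, and management.
# Counts are illustrative, based on the figures in the text.

standalone = {
    "power supplies": 12 * 2,   # assuming redundant PSUs per switch
    "fans": 12,
    "management ports": 12,
    "console ports": 12,
}

chassis = {
    "power supplies": 4,        # somewhere in the 2-8 range
    "fans": 2,                  # 1-2 central fan trays
    "management ports": 1,
    "console ports": 1,
}

for part in standalone:
    saved = standalone[part] - chassis[part]
    print(f"{part}: {standalone[part]} -> {chassis[part]} (save {saved})")
```

The per-part savings compound: fewer cables, fewer spares to stock, and no dedicated management switch.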

This is a Dell Blade Enclosure:

You can fit up to 32 quarter-height servers in a 10 RU enclosure. This means you can save space and money: instead of a maximum of 42 1U servers in a rack, you can install 4 enclosures with 32 servers each, for a total of 128 servers.
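
The rack-density math works out like this (a quick sketch; it assumes 1U pizza-box servers and a standard 42U rack):

```python
# Rack-density comparison: a 42U rack of 1U servers versus
# four 10U blade enclosures, each holding 32 quarter-height blades.
rack_units = 42
pizza_box_servers = rack_units // 1        # one 1U server per rack unit

enclosure_ru = 10
blades_per_enclosure = 32
enclosures = rack_units // enclosure_ru    # 4 enclosures fit in 40U
blade_servers = enclosures * blades_per_enclosure

print(pizza_box_servers, "vs", blade_servers)
```

That's roughly a 3x density improvement in the same floor space, before you even count the shared power and cabling.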

You can also share power, cooling, networking, KVM, and iDRAC. If you look at the back of each enclosure:

You can see, for example, 6 power supplies shared among 32 servers, 9 fan trays shared among 32 servers, and 2 CMCs shared as well. To summarize, a blade server enclosure lets you save money the same way a chassis does:

  1. It’s easier to use: you pull a server out and put a server in.
  2. Components are shared.
  3. Less power
  4. Less cabling
  5. Better airflow
  6. Generally lower TCO

Now let’s have a look at blade enclosures. Just as a Cisco 6506 has 6 slots, a 6509 has 9 slots, and a 6513 has 13 slots, Dell’s blade line has a single enclosure model, the M1000E:

It has room for 32 quarter-height servers, 16 half-height servers, or 8 full-height servers. The chassis comes EMPTY. To power it up, you need power supplies.

Minimum is 3, maximum is 6.
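
A handy way to sanity-check mixed blade configurations is to count in half-height slot units, since the chassis holds 16 half-height blades. The 0.5 figure for quarter-height blades and the 2 for full-height are implied by the 32/16/8 capacities above:

```python
# Slot arithmetic for the M1000E, counted in half-height slot units.
# Capacities from the text: 32 quarter-height, 16 half-height,
# or 8 full-height blades fill the chassis.
TOTAL_SLOTS = 16

def slots_used(quarter=0, half=0, full=0):
    """Half-height slot units consumed by a mix of blade sizes."""
    return quarter * 0.5 + half * 1 + full * 2

# Each all-one-size configuration should exactly fill the chassis:
assert slots_used(quarter=32) == TOTAL_SLOTS
assert slots_used(half=16) == TOTAL_SLOTS
assert slots_used(full=8) == TOTAL_SLOTS

# A mixed configuration: 4 full-height plus 8 half-height blades
print(slots_used(full=4, half=8))
```

Any mix that sums to 16 or less fits; anything over means you need a second enclosure.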

To cool it, you will need fans; they are included with the chassis. To connect to the chassis for management so you can get started, you need a CMC, the Chassis Management Controller. The chassis comes with one, and you can order a second for redundancy.

Notice the CMC.

To manage the servers themselves that are in the chassis, you need an iKVM, which is included with the chassis.

So, we have a working chassis. The next step is to figure out which servers you want in the chassis: quarter-height, half-height, or full-height.

The second question is which level of server.

  • Quarter-height: You have one choice – a full-height housing that holds 4 quarter-height blades.
  • Half-height: You can pick the economy model, the M520, or the high-end M620. The key difference is that the M520 maxes out at 192GB of RAM, while the M620 goes up to 768GB.
  • Full-height: The M820 is the newest G12 model; the M910 and M915 are older G11 models.
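
For quick reference, here's that lineup as a small lookup table (details as stated in this post; verify exact memory ceilings and generations against Dell's spec sheets):

```python
# Blade options for the M1000E, per the summary above.
BLADE_OPTIONS = {
    "quarter-height": ["(sold as a full-height housing of 4 quarter-height blades)"],
    "half-height": ["M520 (economy, up to 192GB RAM)", "M620 (high-end, up to 768GB RAM)"],
    "full-height": ["M820 (newest, G12)", "M910 (G11)", "M915 (G11)"],
}

def options(form_factor):
    """Return the blade models available for a given form factor."""
    return BLADE_OPTIONS.get(form_factor, [])

print(options("half-height"))
```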

 
Once you’ve picked your blades, you need to put memory in them. After you figure out how much memory you need, decide on storage: each blade can have 0, 1, or 2 hard drives.

The last thing is which network card. Here’s a quick cheat sheet:

So now you have a chassis, you’ve picked your servers, and you’ve populated them with memory. Next, the servers need to communicate with the network. At the back of the chassis sit mini switches:

These switches have ports like a regular switch:

So let’s review. If server 1 wants to communicate with server 2, they use the backplane of the chassis. If server 1 wants to communicate with the internet or the rest of the network, it uses the network switch in the back.

The network switch options in the back are pretty complicated, so you have to figure out whether you want 1G, 10G, or 40G Ethernet, Fibre Channel, InfiniBand, stacking, or FCoE.

You should save LOADS of money in a big datacenter by using blade enclosures instead of individual pizza-box servers.

P.S. This year, Dell is offering the option of adding storage directly to the blade enclosure. So instead of buying a separate storage array, you buy a storage blade, called the M4110, that slots directly into the enclosure.

___________________________________________________________________________________________________________________________________________________________________

Saar Harel is a resident engineer at Myriad Supply and has been in the networking field for over 20 years. You can check out his Google+ and ask him questions!