Standalone Hyper-V Server Part 1 - The Hardware

I'm finally getting around to my first post! This is a simple one to kick off a series of posts on setting up a Hyper-V server in a standalone configuration with an onboard RAID controller. We will get into some pretty fun stuff later (think NIC teaming and virtual switches, optimal storage allocation units, and some fun PowerShell), but for now we'll look at a piece-by-piece hardware breakdown of a server I have actually deployed, and my selection criteria for each item. Let's get started...

First, a list of the hardware (and OS) for a basic standalone Hyper-V server. This server has a capable pair of CPUs, enough RAM to run a good number of VMs, fairly fast hard drives with a lot of storage, and some fiber NICs that are phenomenal for the price. The main sacrifices here were the HDDs and NICs, since there are no SSDs and the NICs aren't RDMA capable. This was obviously selected for the needs of the company it was deployed in; it isn't right for everyone!

Description                Model
Chassis                    R2308WTTYS
Rails                      AXXPRAIL
Redundant PSU              FXX1100PCRPS
CPU (x2)                   E5-2650 v3
RAM                        KVR21R15D4K4/64
RAID Controller            RMS3CC080
RAID Controller Battery    AXXRMFBU5
HDD - OS                   WD5003ABYZ
HDD - Storage              0F23650
NIC                        E10G42BFSRBLK
Server 2012 R2 OS          P71-07835

Now let's break down what I do and don't like about each part for a Hyper-V server.

R2308WTTYS Intel Chassis: This is a great base chassis to use if you are like me and don't want to use a branded server. It supports the Xeon E5 v3 series of processors, uses DDR4 RAM, has onboard 10Gb Ethernet NICs (these are 10GBASE-T, NOT fiber, so you'll need 10GBASE-T switches to use them), and has a lot of room for PCI Express cards with multiple risers. It has 3.5" drive bays, which is great if you want a medium-performance storage array with a lot of space. If you want more speed, there is a 2.5" version, but be prepared for a big jump in $/GB on the drives. I wish it came with the secondary PSU, but I guess you have to please the crazy people who will run a server without redundant power supplies.

AXXPRAIL Intel Rails: Sure are some rails. They retract and stuff... 

FXX1100PCRPS PSU: This is the required PSU to have redundant PSUs with the chassis. Yes, it's important. No, I won't help you recover your data.

E5-2650 v3 CPU: CPUs are not a fun bottleneck to run into. Two of these give you 20 cores with 40 threads, which should be enough to drive several 10Gb NICs and a bunch of VMs. That being said, this is definitely one item you want to select based on your needs. Sounds like a future blog post, hmm....
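
Once the box is up, it's easy to sanity-check what the host actually sees before you start handing out vCPUs. Here's a minimal PowerShell sketch (run on the host; the output formatting is just my preference):

    # Tally sockets, physical cores, and logical processors on the host
    $cpus = Get-WmiObject -Class Win32_Processor
    "{0} socket(s), {1} cores, {2} logical processors" -f $cpus.Count,
        ($cpus | Measure-Object NumberOfCores -Sum).Sum,
        ($cpus | Measure-Object NumberOfLogicalProcessors -Sum).Sum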

KVR21R15D4K4/64 RAM: Hypervisors need RAM, lots and lots of it. Even 256GB will get chewed up pretty fast depending on your load. Fortunately, this build still has room to add another 8 sticks if you need more later.
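
If you do start running low, Dynamic Memory is the usual pressure valve before buying more sticks. A quick sketch with the stock Hyper-V cmdlet (the VM name and sizes here are made-up examples, not recommendations):

    # Let Hyper-V balloon this VM between 1GB and 8GB instead of
    # pinning a fixed allocation (VM name and sizes are examples)
    Set-VMMemory -VMName "app01" -DynamicMemoryEnabled $true `
        -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 8GB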

RMS3CC080 & AXXRMFBU5 RAID Card & Battery: I list them together for a reason; don't get one without the other or you are risking data loss if your server loses power. That being said, this is an Intel RAID card that attaches directly to the motherboard and does not take a PCIe slot. That, coupled with an LSI chip and guaranteed compatibility with this motherboard, makes it a winner in my book.

WD5003ABYZ HDDs: These are just for an OS RAID1. You could easily run this server off of a flash drive or a single drive. The biggest benefit of running it off of a flash drive is that it frees up two more HDD bays for storage drives, which is probably what I would do for a clustered Hyper-V server with shared storage. However, for a standalone server, I prefer the RAID1.

Hitachi 0F23650 HDDs: 6TB apiece and a SAS interface make these nearline SAS drives attractive, but it is the 4Kn format they present that really makes these things shine! Matching those 4KB sectors with an appropriately formatted file system can really help your storage performance with some workloads, Hyper-V VHDX file storage being one of them. It also removes some issues you would otherwise have in a hot/cold tier setup with SSDs, since SSDs are almost exclusively 4Kn. They also come in an 8TB model.
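
Since we'll dig into allocation units in a later post, it's worth verifying up front that the disks really present 4K sectors, and formatting the storage volume to match. A sketch with the in-box storage cmdlets (the drive letter and the 64KB allocation unit are example values):

    # Confirm the drives report 4K logical/physical sectors
    Get-Disk | Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize

    # Format the storage volume with a 64KB NTFS allocation unit
    Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536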

E10G42BFSRBLK Intel Fiber NIC: A pretty economical way to do 10Gb fiber. This card has two interfaces, supports 64 VMQ or SR-IOV queues per interface (we'll get to this later), comes with the SFP+ modules, and is backed by Intel reliability.
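
As a teaser for the networking posts, the VMQ and SR-IOV capabilities of these cards are easy to inspect once the drivers are in (adapter names will be whatever your system enumerates):

    # Show VMQ state and how many receive queues each adapter exposes
    Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues

    # Check SR-IOV support and virtual function counts
    Get-NetAdapterSriov | Format-Table Name, Enabled, NumVFs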

P71-07835 2012R2 Datacenter Open Business License: If you are going to run a lot of small VMs, it makes sense to go with Datacenter licensing. If you are just virtualizing a couple of large database VMs or something, you might look into Server 2012 R2 Standard virtualization licensing.

Phew, that was awfully long-winded for some server hardware, but there it is. Stay tuned for more posts about the actual configuration.
