LSI MegaRAID SAS 3108 – Cisco 12G SAS Raid – VSAN JBOD

The other day I decided to swap out the disk I was using for VSAN caching. I had been doing some testing with an NVMe drive, but now had to back it down to a SAS SSD. The replacement disk had previously been used in a different system, so it had a foreign configuration that I had to clear.

  1. Best practice is to work on one host at a time. Put the first host in maintenance mode and choose “Ensure accessibility”.
  2. After the host is in maintenance mode, click on your cluster, then the “Configure” tab, and then “Disk Management”.
  3. Click on the disk group that you want to remove and then click the “Remove the disk group” button.
  4. You will get another data migration question. I chose “Ensure data accessibility from other hosts”. Click “Yes”.
  5. Wait for the disk group to be removed from the host. When complete, reboot your host. When prompted during the boot process, press “Ctrl-R” to get into the RAID configuration utility.
  6. Press “Ctrl-P” or “Ctrl-N” to switch pages. One of the pages shows your disks and the slots they are in. Here is the problem: the only option for my 400GB SSD is to erase the disk, because it is in the “Foreign” state.
  7. Switch to the “Virtual Drive Management” page, highlight the Cisco 12G SAS Modular Raid controller, and press “F2”. This brings up a menu; select “Foreign Config” and then “Clear”.
  8. This will clear out your configuration so please make sure that you have thought things through. If you are OK with the possibility of data loss, click “Yes“.
  9. Now we are getting somewhere. The disk now shows UG (Unconfigured Good).
  10. Highlight the disk and press “F2”. From the menu, select “Make JBOD”.
  11. DATA ON DISKS WILL BE DELETED so make sure you want to do this. Click “Yes” to proceed.
  12. All looks good. Escape out and exit the application. Reboot your host when done.

  13. My new 400GB disk shows up in VMware now.

  14. Now click on your cluster, then the “Configure” tab, and then “Disk Management”. Click on the host you removed the disk from earlier and click the “Add disk group” button. Choose your cache disk and your capacity disks and you are ready to go. Take the host out of maintenance mode and repeat these steps on each host. If you prefer the command line, a rough equivalent is sketched below.
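
If you would rather script this, the same flow can be done from the shell. This is just a sketch: it assumes the StorCLI VIB is installed on the host, and the controller number, enclosure:slot (/c0/e252/s1), and device names are placeholders you would replace with your own after running the list commands.

    storcli /c0/fall show                            # list foreign configurations on controller 0
    storcli /c0/fall delete                          # clear the foreign config (destroys that data!)
    storcli /c0/e252/s1 set jbod                     # expose the disk as JBOD

    esxcli vsan storage list                         # note the cache SSD's device name first
    esxcli vsan storage remove -s <cache-device>     # removing the cache device drops the whole disk group
    esxcli vsan storage add -s <cache-device> -d <capacity-device>   # recreate the disk group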

VSAN on Cisco C240-M3 with LSI MegaRAID SAS 9271-i

In the past I have configured an LSI MegaRAID SAS 3108 – Cisco 12G SAS RAID controller with a 1GB FBWC module. When I set that up, I just passed the disks through to VMware. The MegaRAID SAS 9271-i is different; here is how I set it up. I used VMware KB2111266 for reference on the configuration settings.

When the controller information comes up during boot, press Ctrl-H.

  1. Click “Start”.
  2. I already have Virtual Drive 0 configured for my ESXi OS. Virtual Drive 1 has the 400GB disk I am using for VSAN caching. I have four unconfigured disks that I want to use for my capacity tier. Click “Configuration Wizard”.
  3. Click “Add Configuration” radio button and then click “Next“.
  4. Click “Manual Configuration” radio button and then click “Next“.
  5. Now we see the four unconfigured drives on the left side. Click the first one, then click “Add to Array“.
  6. Click on the “Accept DG” button. Repeat Steps 5 and 6 until all of your disks are in their own disk group then click “Next“.
  7. In the left pane click the “Add to SPAN” button.
  8. The Disk Group appears in the right window under Span. Click “Next“.
  9. Depending on whether the disk is an HDD or SSD, your settings will change. In my example I configured for an HDD. When finished changing settings, click “Accept” and then “Next”.

  10. You will receive an alert about the possibility of slower performance with Write Through. Click “Yes”.
  11. You now have to click “Back” and repeat steps 7-10 until all of your drives have been added.
  12. Once all of your drives have been added, click “Accept”.
  13. Click “Yes” to save the configuration.
  14. Acknowledge that you know that data will be lost on the new virtual drives. Click “Yes”.
  15. You will now see all of your drives under Virtual Drives. Click the “Home” button.
  16. Click “Exit”.
  17. Click “Yes”.
  18. Power cycle your server.
  19. Success!! vCenter shows my drives under storage devices. I can now add these disks to VSAN.
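
If you have a lot of hosts, clicking through WebBIOS for every drive gets old. A rough StorCLI sketch of the same per-disk RAID0 layout follows, matching the Write Through setting the wizard warned about above (No Read Ahead and Direct IO are the usual companions; confirm against the KB). Controller 0 and the enclosure:slot numbers are assumptions, so check them with the show command first.

    storcli /c0 show                                           # find enclosure:slot for each Unconfigured Good disk
    storcli /c0 add vd type=raid0 drives=252:2 wt nora direct  # one RAID0 VD per capacity disk
    storcli /c0 add vd type=raid0 drives=252:3 wt nora direct
    storcli /c0 add vd type=raid0 drives=252:4 wt nora direct
    storcli /c0 add vd type=raid0 drives=252:5 wt nora direct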

 

Storage Policy Based Management Wins

Hey everyone, it’s my first VSAN post! For the past few months I have been building out VSAN in a few test environments. One of them is what I call a Frankenstein cluster: four Cisco C-Series hosts with four SSDs for caching and 16 SSDs for capacity, set up as an all-flash VSAN. (To do dedupe and compression you have to have all flash.) I am not going into performance discussions right now; instead I want to talk about Storage Policy Based Management, or SPBM. Last week someone asked me where to set a disk to Thick/Thin within the web client. Notice that Type says “As defined in the VM storage policy”.

Here is the default policy assigned to this VM. Notice that there is nothing in my rules that would define thick/thin or anything in between.

I bet you are now thinking, “What does the fat client say?” Well, I am glad you asked. I have a VM on the VSAN datastore with a 40GB Thick Provision Eager Zeroed Hard disk 1.

What does that look like for storage usage? I will show it from both the fat and web clients. (I think there is a bug in the web client that I will discuss shortly.) I bet you are asking why it shows 80GB of used storage. Remember my storage policy? It is set for RAID 1, which mirrors the data. Keep that in mind if you will be using RAID 1 with VSAN. The numbers all seem to jibe.
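
A quick sanity check of the math, assuming the default policy of Number of failures to tolerate = 1 with RAID-1 mirroring:

    40 GB VMDK x 2 mirror copies (FTT=1, RAID-1) = 80 GB consumed on the VSAN datastore

(plus a small witness component and namespace overhead, which is why real numbers rarely land exactly on the round figure).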


Hey, let’s add a second disk. Let’s make it 100GB Thick Eager Zero. After adding the disk I went into Windows and onlined/initialized it; these were the results. This is where I think there is a VMware bug: if you look at the second image, the storage usage in the web client at the VM level never changes. Comparing some numbers: the used total was 591.73GB before and increased to 794.68GB, a change of about 203GB. Free space went from 6.04TB to 5.84TB, a change of 200GB. Both are right around the 200GB (100GB x 2 mirrors) we would expect.

Now let’s have some fun! Time to change the Default VSAN Storage Policy. Go to Home → VM Storage Policies. Highlight the policy you want to change and then click the little pencil icon to edit. Click the “Add Rule” drop-down at the bottom and choose “Object space reservation (%)”. I chose the default of 0. This means that any disks with the default storage policy assigned will essentially be thin provisioned; space will not be consumed until something is actually written to the drive. I chose to apply to all VMs now (I only have one). This might take some time if you have a lot of VMs that will change.
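
As an aside, there is a host-level analogue you can poke at from the ESXi shell. This is not the same thing as the SPBM policy edited above (it only covers objects created without a policy), so treat it as a sketch for the curious; the proportionalCapacity attribute is the object space reservation, where i0 means 0%.

    esxcli vsan policy getdefault      # show the host's default policy per object class
    esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"proportionalCapacity\" i0))"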


You should now be back at the storage policy screen. I want to make sure the policy applied, so I clicked on my default storage policy and then clicked the “Check Compliance” button. On the right side I see “Compliant 1”. Just to make sure this applied to all disks on the VM (separate storage policies can apply to different disks), I went back to my VM → Manage → Policies. Notice all disks are compliant.


What does this all mean for my space? Let’s break it down. The used total was 794.68GB and is now 514.68GB, a change of exactly 280GB! Free space went from 5.84TB to 6.11TB, a change of 270GB. Look at the used space on the datastore. Notice also that the VM now shows provisioned storage of 144GB and used of 36.43GB.


Now for the interesting part. Let’s look at the fat client. Notice that the disks still show that they are Thick Provision Eager Zeroed, but because of the storage policy, they really are not.

In conclusion, the storage policy wins, even though the fat client doesn’t seem to know it. Please let me know if you have any questions or want me to test anything else with this scenario.

Creating Cisco UCS port channels and then assigning a VLAN

In my new position I am learning a lot about UCS. Today I had to create a port channel for both A and B fabrics and then assign a VLAN to both.

  1. Log into Cisco UCS Manager.
  2. Click on the LAN tab and then the plus sign next to Fabric A.
  3. Right click Port-Channels and select Create Port Channel.
  4. Give the port channel an ID and a Name.
  5. Select which ports are going to be used in this port channel group. Make sure you hit the >> button to move them over to be in the port channel. Click Finish.
  6. Repeat steps 2-5 for Fabric B.
  7. Click LAN in the left navigation window and then at the bottom click on LAN Uplinks Manager.
  8. Click the VLAN tab and then the VLAN Manager tab. In the left navigation pane, select the port channel you created earlier; in my case I am using port channel 23. In the right window, check the box for each VLAN you want to be part of this port channel. Click the “Add to VLAN/VLAN Group” button at the bottom of the screen.

    That should be it. You have now created a port channel and assigned a VLAN to it!
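
For reference, the port-channel half of this can also be done from the UCSM CLI. A minimal sketch for Fabric A, assuming port channel ID 23, a made-up name, and member ports 1/17 and 1/18 (substitute your own); I still did the VLAN-to-port-channel mapping through the LAN Uplinks Manager as described above.

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # scope fabric a
    UCS-A /eth-uplink/fabric # create port-channel 23
    UCS-A /eth-uplink/fabric/port-channel* # set name PC23-FabA
    UCS-A /eth-uplink/fabric/port-channel* # enable
    UCS-A /eth-uplink/fabric/port-channel* # create member-port 1 17
    UCS-A /eth-uplink/fabric/port-channel/member-port* # exit
    UCS-A /eth-uplink/fabric/port-channel* # create member-port 1 18
    UCS-A /eth-uplink/fabric/port-channel/member-port* # exit
    UCS-A /eth-uplink/fabric/port-channel* # commit-buffer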

VMK0 MAC Change

I ran into an issue the other day in a UCS blade system where all of the vmk0 interfaces had the same MAC address. See VMware KB https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1031111

I had to remove the vmk0 interface (which removes management connectivity) and then add it back through a KVM connection to the host. This is how I did it. Make sure you save the IP information before making any changes.

VMK0 MAC Before

  1. Open an iLO or KVM session to your host. Under the troubleshooting options, choose “Enable ESXi Shell”. Once enabled, press Alt-F1 and you should be at a login prompt. Log in with root and your root password.
  2. I am going to use my port group name of “ESXi_MGMT” as an example; yours might be different. Type esxcfg-vmknic -d -p ESXi_MGMT. This will remove vmk0.
  3. Type esxcfg-vmknic -a -i <management IP> -n <netmask> -p ESXi_MGMT. This will add the vmkernel interface back with a newly generated MAC. The full sequence is sketched below.
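
Putting steps 2 and 3 together, the whole thing from the ESXi shell looks like this. The IP, netmask, and port group name are examples from my environment; substitute your own (from the esxcfg-vmknic -l output you saved).

    esxcfg-vmknic -l                                                # record vmk0's IP, netmask, and port group first
    esxcfg-vmknic -d -p ESXi_MGMT                                   # delete vmk0 (management connectivity drops here)
    esxcfg-vmknic -a -i 192.168.1.50 -n 255.255.255.0 -p ESXi_MGMT  # recreate vmk0, which generates a fresh MAC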

VMK0 MAC After Change

One important thing to note: management traffic is no longer enabled on my vmk0 connection. You must edit this connection and check the box for management traffic. VMware will have automatically moved management to another vmk, so make sure you go through and remove it from that vmkernel. This can also be done from the shell, as shown below.
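
If you would rather fix the tags while you are still in the KVM session, the commands below should cover it; vmk1 is just an example of wherever management landed.

    esxcli network ip interface tag get -i vmk0                  # see which services are tagged on vmk0
    esxcli network ip interface tag add -i vmk0 -t Management    # re-enable management traffic on vmk0
    esxcli network ip interface tag remove -i vmk1 -t Management # remove it from the vmk it moved to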

Storage vMotion Folder Rename


I ran into a project the other day where the VM names in vCenter did not match the Windows hostnames of the VMs.  The VMware administrator was fixing this by shutting down the original machine and then cloning it.  The problem is that he then had to do some reconfiguration, and the name on the datastore would still be wrong, because vSphere appends “_1” to the folder name when a folder with that VM name already exists.  The easiest way to change the name on your datastore is to do a Storage vMotion.  In my example I created a VM named “Original” and then changed the name to “NewName”.  I will show what happened along the way.

What does the original datastore look like?


Now I renamed the VM from “Original” to “NewName”.  Notice that on the datastore the folder and files still use the “Original” name.


Time to do a Storage vMotion.  For my example I am using a powered-off VM for simplicity.  The full C# VIC will not let you do a live Storage vMotion of a VM; however, you can use the Web VIC to accomplish this.


Now the “NewName” VM is on a new datastore.  Looking at the datastore, we see both the folder and file names have all been changed.
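
For completeness: if a Storage vMotion is not an option, vmkfstools can rename the VMDK pair from the ESXi shell, but only the VMDK. The folder, the .vmx, and re-registering the VM are all still manual, which is exactly why the svMotion approach is the smarter play. The paths below are examples, and the VM must be powered off.

    # renames both the descriptor and the -flat file and fixes the reference between them
    vmkfstools -E /vmfs/volumes/datastore1/Original/Original.vmdk /vmfs/volumes/datastore1/Original/NewName.vmdk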

Enabling EVC (Enhanced vMotion Compatibility)

It has been a while since I have had to enable EVC, but I needed to the other day in the office.  I created a cluster with an HP DL380 G7 and an older HP DL380 G5.  When I tried to turn EVC on for the cluster, I ran into this error: “The host cannot be admitted to the cluster’s current Enhanced vMotion Compatibility mode.  Powered-on or suspended virtual machines on the host may be using CPU features hidden by that mode.”  This message is telling you that the currently powered-on machines are using features of the newer processor, and in order to turn on EVC for the cluster, those VMs need to be powered off.  So… I powered off all of my VMs on the DL380 G7 (the newer host).

After all VMs are powered down, right click on the cluster and select Edit Settings.

Click VMware EVC in the left pane and then click the Change EVC Mode… button.

I have Intel processors, so I selected Enable EVC for Intel Hosts.  Now I get a green check under the Compatibility pane.  Looking good!

Now, depending on the processor generation, you have to choose the EVC mode.  For mine, I chose the Intel “Penryn” Generation and still had a green check.  If your hosts don’t support the selected EVC mode, it will let you know in the Compatibility pane.  The processor support documentation can be found here.

We now see that Intel “Penryn” Generation is my EVC mode.  The only thing left to do is power on the VMs and start your migrations!