
Week Two of Cisco UCS Implementation Completed

Progress has been made!!

The first few days of the week involved a number of calls back to TAC, the UCS business unit, and various other Cisco resources without much progress. Then on Thursday I pressed the magic button and all of a sudden our fabric interconnects came alive in Fabric Manager (the MDS control software). What did I do? I turned on SNMP. No one had noticed that it was turned off (the default state). Pretty sad, actually, given the number of people involved in troubleshooting this.

(This paragraph is subject to change pending confirmation from Cisco.) Here's the basic gist of what was going on. We are running an older version of MDS firmware, and the version of Fabric Manager that ships with it is not really "UCS aware": it needs a way of communicating with the fabric interconnects to fully see all the WWNs. The workaround is to use SNMP. I created an SNMP user in UCS and our storage admin created the same username/password in Fabric Manager. Of course, having the accounts created does nothing if the protocol they need to use is not active. Duh.
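For reference, this is roughly what the same change looks like from the UCS Manager CLI instead of the GUI. Treat it as a sketch from memory: the username is a placeholder (it just has to match the account your storage admin defines in Fabric Manager), and the exact prompts and options vary by UCS Manager release, so check the CLI configuration guide for yours.

    UCS-A# scope monitoring
    UCS-A /monitoring # enable snmp
    UCS-A /monitoring # create snmp-user fmuser
    UCS-A /monitoring/snmp-user # set auth sha
    UCS-A /monitoring/snmp-user # set password
    Enter a password:
    Confirm the password:
    UCS-A /monitoring/snmp-user # commit-buffer

Once the SNMP user is committed on the UCS side, the matching account in Fabric Manager should be able to poll the fabric interconnects.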

The screenshot below shows the button I am talking about. The reason no one noticed that SNMP was turned off is that I was able to add traps and users without any warning that SNMP itself was not active. Also, take a look at the HTTP and HTTPS services listed above SNMP: they are enabled by default. Easy to miss.

[Screenshot: the SNMP admin state setting in UCS Manager, with the HTTP and HTTPS services listed above it]

With storage now presented, we were able to complete some basic testing. I must say that UCS is pretty resilient if you have cabled all your equipment wisely. We pulled power plugs, fibre to Ethernet, fibre to storage, etc., and only a few times did we lose a ping (a single ping!). All our data transfers kept transferring, pings kept pinging, and RDP sessions stayed RDP'ing.

We did learn something interesting regarding the Palo card and VMware. If you are using the basic Menlo card (a standard CNA), failover works as expected. Palo is different. Suffice it to say that for every vNIC you think you need, add another one; in other words, you will need two vNICs per vSwitch. When creating vNICs, be sure to balance them across both fabrics and note down the MAC addresses. Then, when you are creating your vSwitches (or DVS) in VMware, assign two vNICs to each switch, using one from fabric A and one from fabric B. This provides the failover capability. I can't provide all the details because I don't know them, but one of the UCS developers explained to me that this is a difference in UCS hardware (Menlo vs. Palo).
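To make that concrete, here is a rough sketch of the host side on classic ESX once the two Palo vNICs (one pinned to fabric A, one to fabric B) show up as vmnics. The vmnic numbers and vSwitch name are just examples; match the vmnics to the MAC addresses you recorded in UCS Manager. The vSphere Client or a DVS accomplishes the same thing.

    esxcfg-nics -l                      # list host NICs; identify the two Palo vNICs by MAC address
    esxcfg-vswitch -a vSwitch1          # create the vSwitch
    esxcfg-vswitch -L vmnic2 vSwitch1   # add the uplink that maps to fabric A
    esxcfg-vswitch -L vmnic3 vSwitch1   # add the uplink that maps to fabric B

With one uplink per fabric on each vSwitch, losing a fabric interconnect (or its uplinks) just fails traffic over to the other vmnic.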

Next up: testing, testing, and more testing, with some VLANing thrown in to help us connect to two disjoint L2 networks.
