Posts Tagged ‘HP’

Some Photos of the Cisco C-210 M2

August 25, 2010

Yesterday I posted some thoughts on the C-210. Here are a few photos to help visualize what I was referring to.

This first photo is of the C-210 internals.  Notice the size of the fans and the wasted space between the fans and the motherboard.  Click on the photos for larger images.

Photo of Cisco C-210 M2 server insides


This next photo attempts to show the depth difference between an HP DL380 and a C-210. It also shows how long the entire server/cable arm combo is. It’s a bit hard to tell from the photo, but the HP cable arm is just a hair longer than the cable management tray on the right. The C-210’s cable arm sticks out past the cable tray by a few inches.

C-Series and HP cable arms in use.

This last photo is a closeup (sort of) of the C-210 cable arm.  Ignore the purple cables.

Larger image of C-210 cable arm in use


A Major Milestone Has Been Reached!!

August 24, 2010

We did it, and we did it early. We completed the move of our existing VMware infrastructure onto the Cisco UCS platform. At the same time, we also moved from ESX 3.5 to vSphere. All in all, everything is pretty much working. The only outstanding issue we haven’t resolved involves Microsoft NLB and our Exchange CAS/HUB/OWA servers. NLB just doesn’t want to play nice, and we don’t know whether the issue is related to vSphere, UCS, or something else entirely.

Next up: SQL Server clusters, P2Vs, and other bare metal workloads.

SQL Server migrations have already started and are going well.  We have a few more clusters to build and that should be that for SQL.

P2Vs present a small challenge. A minor annoyance that we will have to live with is an issue with VMware Converter. Specifically, we’ve run into a problem with resizing disks during the P2V process. The conversion fails about 2% in with an “Unknown Error”. It seems a number of people have run into this problem, and the workaround provided by VMware in KB1004588 (and others) is to P2V as-is and then run the guest through Converter again to resize the disks. This is going to cause us some scheduling headaches, but we’ll get through it. Without knowing the cause, I can’t narrow it down to being vSphere or UCS related. All I can say is that it does not happen when I P2V to my ESX 3.5 hosts. Alas, they are HP servers.
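
For what it’s worth, if the disks only ever need to grow (never shrink), one alternative to the second Converter pass would be to P2V as-is and then extend the VMDK through the vSphere API, finishing with a partition extend inside the guest. Below is a rough Python sketch of that idea using the pyVmomi bindings; we haven’t actually scripted this, and the vCenter host, credentials, VM name, disk label, and target size are all made-up placeholders.

    # Rough sketch only: grow a VMDK after an as-is P2V using pyVmomi.
    # Host, credentials, VM name, disk label, and size are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local", user="admin", pwd="secret")
    vm = si.content.searchIndex.FindByDnsName(None, "p2v-guest01", True)

    # Locate the virtual disk to grow by its label.
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == "Hard disk 1")

    # Grow the disk to 80 GB (the API can only grow a disk, never shrink it).
    disk.capacityInKB = 80 * 1024 * 1024
    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
    change.device = disk
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))

    Disconnect(si)

The guest OS would still have to extend its partition afterward (diskpart, extpart, or similar), so it’s not exactly a free lunch either.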


We’ve gone all-in with Cisco and purchased a number of the C-Series servers, recently deploying a few C-210 M2 servers to get our feet wet. Interesting design choices, to say the least. I will say that they are not bad, but they are not great either. My gold standard is the HP DL380 server line, and compared to the DL380, the C-210 needs a bit more work. For starters, the default drive controller is SATA, not SAS. I’m sorry, but I have a hard time feeling comfortable with SATA drives deployed in servers. SAS drives typically come with a 3-year warranty; SATA drives typically have a 1-year warranty. For some drive manufacturers, this stems from the fact that their SAS drives are designed for 24/7/365 use, but their SATA drives are not.

Hot-plug fans? Nope. These guys are hard-wired, and big. Overall length of the server is a bit of a stretch too, literally. We use the extended width/depth HP server cabinets and these servers just fit. I think the length issue stems from the size of the fans (they are big and deep) and some dead space in the case. The cable arm also sticks out a bit more than I expected. With a few design modifications, the C-210 M2 could shrink three, maybe four inches in length.

I’ll post some updates as we get more experience with the C-Series.

Battle of the Blades – Part III

April 27, 2010

So earlier on I posted that I would list our strategic initiatives and how they led us down the path of choosing Cisco UCS as our new server platform as opposed to HP or IBM blades.  Before I begin, let me state that all three vendors have good, reliable equipment and that all three will meet our needs to some degree.  Another item of note is that some of our strategies/strategic direction may really be tactical in nature.  We just sort of lumped both together as one item in our decision matrix (really a fancy spreadsheet).  The last item to note is that all facts and figures (numbers) are based on our proposed configurations (vendor provided) so don’t get hung up on all the specifics.  With that out of the way, let’s begin…

If we go way back, our initial plan was just to purchase more HP rack-mount servers. I have to say that the DL380 server is amazing. Rock solid. But given our change in strategic direction, which was to move from rack-mounts to blades, we were given the option of going “pie-in-the-sky” and developing a wish list. It’s this wish list, plus some specific initiatives, that started us down the path of looking at Cisco UCS (hereafter referred to as UCS).

Item 1: Cabling. Now all blade systems have the potential to reduce the number of cables needed when compared to rack-mount systems. Overall, UCS requires the fewest cables outside of the equipment rack because the only cables to leave the rack are the uplinks from the fabric interconnects. With HP and IBM, each chassis is cabled back to your switch of choice. That’s roughly 16 cables per chassis leaving the rack. With UCS, we have a TOTAL of 16 cables leaving the rack. Now you might say that a difference of 32 cables per rack (assume 3 HP or IBM chassis in a rack) isn’t much, but for us it is. Cable management is a nightmare for us. Not because we are bad at it; we just don’t like doing it, so less cabling is a plus. We could mitigate the cable issue by adding top-of-rack switches (which is sort of what a fabric interconnect is), but we would need a lot more of them and they would add more management points, which leads us to item 2.
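
If you want the quick math behind that 32-cable difference, here it is as a trivial back-of-the-napkin Python snippet; the per-chassis and per-rack figures are the rough numbers from our proposed configs, nothing official.

    # Back-of-the-napkin cable count per rack, using our rough figures.
    chassis_per_rack = 3         # HP or IBM chassis in one rack
    uplinks_per_chassis = 16     # cables leaving the rack per HP/IBM chassis

    hp_ibm_cables = chassis_per_rack * uplinks_per_chassis   # 48 per rack
    ucs_cables = 16              # fabric interconnect uplinks for the whole rack

    print(hp_ibm_cables - ucs_cables)   # 32 fewer cables leaving a UCS rack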

Item 2: Number of management points. Unless you have some really stringent, bizarre, outlandish requirements, the chances are that you will spend your time managing the UCS system at the fabric interconnect level. I know we will. If we went with HP or IBM, we would have to manage down to the chassis level and then some. Not only is each chassis managed separately, think of all the networking/storage gear that is installed into each chassis. Each of those is a separate item to manage. Great, let’s just add in X more network switches and X more SAN switches that need to be managed, updated, secured, audited, etc. Not the best way to make friends with other operational teams.

Item 3: Complexity. This was a major item for us. Our goal is to simplify where possible. We had a lot of back-and-forth getting a VALID configuration for the HP and IBM blade systems. This was primarily the fault of the very large VAR representing both HP and IBM. We would receive a config, question it, get a white paper from the VAR in rebuttal, point the VAR to the same white paper showing that we were correct, and then finally get a corrected config. If the “experts” were having trouble configuring the systems, what could we look forward to as “non-experts”?

Talking specifically about HP, let’s add in HP SIM as the management tool. As our HP rep is fond of stating, he has thousands of references that use SIM. Of course he does, it’s free! We use it too because we can’t afford OpenView or Tivoli. And for basic monitoring functions, it works fine, albeit with a few quirks. Add BladeSystem Matrix on top of it, and you have a fairly complex management tool set. We spent a few hours in a demo of the Matrix in which the demoer, who does this every day, had trouble showing certain basic tasks. The demoer had to fall back on the old tech standby: click around until you find what you are looking for.

Item 4: Multi-tenancy. We plan on becoming a service provider, of sorts. If you read my brief bio, you would remember that I work for a municipal government. We want to enter into various relationships with other municipalities and school districts in which we will host their hardware, apps, DR, etc. and vice-versa. So we need a system that easily handles multiple organizations in the management tool set. Since we are an HP shop, we gave a very strong looksy at how HP SIM would handle this. It’s not pretty. Add in the Matrix software and it’s even uglier. Now don’t get me wrong. HP’s product offerings can do what they claim, but it’s not drag-and-drop to set up for multi-tenancy.

Item 5: Converged Architecture. When we made our initial decision to go with UCS, it was the only converged architecture in town.  I know we are not going to be totally converged end-to-end for a few years, but UCS gets us moving in the right direction starting with item 1: cabling.    All the other vendors seemed to think it was the wrong way to go and once they saw the interest out there, they decided to change direction and move toward convergence too.

Item 6: Abstraction. You could also call this identity, configuration, or, in UCS parlance, service profiles. We really like the idea of a blade just being a compute node, with all the properties that give it an identity (MAC, WWN, etc.) abstracted and portable. It’s virtualization taken to the next level. Yes, HP and IBM have this capability too, but it’s more elegant with UCS. It’s this abstraction that will open up a number of possibilities in the high-availability and DR realms for us further down the road. We have plans….
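
To make the service-profile idea a little more concrete, here is a toy Python sketch of what “the identity lives in the profile, not the blade” means. To be clear, this is not the UCS Manager API or anything Cisco ships; the class, names, and values are made up purely for illustration.

    # Toy illustration only -- NOT the UCS Manager API.
    # The identity (UUID, MACs, WWPNs) belongs to the profile; whichever
    # blade the profile is associated with assumes that identity.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ServiceProfile:
        name: str
        uuid: str                      # server UUID, defined in the profile
        macs: List[str]                # vNIC MAC addresses
        wwpns: List[str]               # vHBA world-wide port names
        blade: Optional[str] = None    # physical blade currently associated

        def associate(self, blade_id: str) -> None:
            # Re-associating carries the UUID/MAC/WWN to the new hardware,
            # so SAN zoning, boot LUNs, and VLANs follow the workload.
            self.blade = blade_id

    esx01 = ServiceProfile("esx01", "deadbeef-0001",
                           ["00:25:B5:00:00:01"], ["20:00:00:25:B5:00:00:01"])
    esx01.associate("chassis-1/blade-3")   # initial placement
    esx01.associate("chassis-2/blade-5")   # blade swap: same identity, new metal

That portability is the piece we’re counting on for the HA/DR plans I hinted at above.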

So there you have it. Nothing earth-shattering as far as tactics and strategy go. UCS happened to come out ahead because Cisco got to start with a clean slate when developing the product. They also didn’t design it for today, but for tomorrow.

Questions, comments?