Early January marked my one-year anniversary with VCE. I was hired to be a Program Manager on the virtualization team, and my first project to lead was bringing vSphere 5 to the world of Vblocks. I didn’t think this would be as difficult as it turned out to be. I knew I would be herding cats, but I didn’t plan on herding cats from outside the herd. About midway through the project, both Cisco and EMC informed us that they weren’t going to certify vSphere 5 on older levels of firmware. In the case of Cisco, this meant that we were going to move all the Vblock platforms to UCS 2.0. For EMC, it meant upgrading the firmware for all our supported storage arrays. In essence, I was actually leading a project to upgrade all the components in a Vblock platform.
If I do say so myself, I did a great job. But it wasn’t just me. I worked with a great team of engineers, tech writers, product managers, trainers, and more. This was truly a cross-functional project and involved over 50 staff across three companies by the time the project completed.
Over that same year, VCE went through tremendous change. When I hired on, VCE was basically a startup. About midway through the year, a certain level of operational maturity was needed. We had achieved significant growth, both in sales and in head count. Thus began a series of reorgs. There was basically one large reorg followed by a series of refining reorgs. In my case, I went through two refinements.
The first reorg moved the virtualization team in with the rest of the Product Management team. It also moved the bulk of our engineering staff into one engineering group. This was a smart move as it removed barriers to introducing new product. Unfortunately for me, all Program Managers were moved into a formal Program Management Office. While I did a great job on the vSphere 5 project, I found that this wasn’t the position for me. Luckily, my managers recognized my talents and kept me on the virtualization team, which was now part of the Product Management group.
As the vSphere 5 project was winding down, the virtualization team was disbanded and we moved into the direct chain of Product Management. Again, not a bad idea, but it did leave me in a bit of limbo since Product Management does not have a need for a Program Manager. Again, I got lucky. The director of Product Management recognized my abilities in the areas of process management, barrier breaking, and general mayhem. A product management operations team was created and I was assigned to it. Our charter is simple: keep things moving. Think of us as “fixers”. If a project is in trouble, we show up and get it back on track. If someone is not getting things done in a timely manner, we will. We are also developing various policies, processes, and procedures for the Product Management team, as well as working with other teams inside of VCE to develop company-wide policies and processes.
It’s been interesting to me because I am being exposed to areas of the business that I have not had previous exposure to. For example, I am working with the marketing group on website redesign and developing launch materials. I am also working with our supply chain managers on setting appropriate stocking levels.
I’ve had an exciting first year. I’m betting the second is going to be even better.
Got this in an email from WordPress. Pretty cool that they keep stats like this. Here’s how I did for 2010.
The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:
The Blog-Health-o-Meter™ reads Fresher than ever.
A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 3,600 times in 2010. That’s about 9 full 747s.
In 2010, there were 33 new posts, not bad for the first year! There were 22 pictures uploaded, taking up a total of 8MB. That’s about 2 pictures per month.
The busiest day of the year was October 27th with 104 views. The most popular post that day was Does ESX lack storage resiliency?.
Where did they come from?
The top referring sites in 2010 were twitter.com, vlp.vsphere-land.com, thevpad.com, definethecloud.net, and en.wordpress.com.
Some visitors came searching, mostly for datacenter, matt mancini vmware, pdu, cisco ucs, and ucs f0327.
Attractions in 2010
These are the posts and pages that got the most views in 2010.
Does ESX lack storage resiliency? October 2010
Week One of Cisco UCS Implementation Complete July 2010
My Thoughts on Our Cisco UCS Sales Experience August 2010
About April 2010
First Impressions of VMware CapacityIQ October 2010
The recession has brought about a few major changes in sales/marketing techniques in the technology industry. There was a time when only executive management was wined and dined and the common man was left out in the cold. Well my friends, that time is no more.
Over the last 18 months or so, I have been invited to more lunches and activity-based events than I had in my 20+ years in the IT industry. The two (lunches and activity-based events) can be broken down into two categories of providers: those selling really expensive products and those with not-so-expensive products.
Those in the really expensive product category are usually storage providers. Since these systems can easily reach into the hundreds of thousands of dollars, the sales/marketing experience has to be equally impressive. As such, the event most often chosen is lunch at an upscale steak restaurant such as Ruth’s Chris or Fleming’s. The typical event consists of a short presentation (usually under 30 minutes) followed by a lunch from a scaled-down menu. Even though the menu is scaled down, the quality of the food is not; the reputation of the restaurant is still on display.
In the not-so-expensive category, we typically find VARs and small product vendors. The event of choice in this category is entrance to something with mass appeal, such as a blockbuster movie’s opening day. As with the lunches, the event begins with a 30-minute presentation and then the movie begins. This type of event has become so pervasive that I recently had three invitations to see Iron Man 2 at the same theater on the same day (all at different times).
I don’t go to the lunches very often because I feel it is disingenuous to take advantage of something so expensive for no return. I only attend when I have a budgeted project. I’m also careful to keep track of the “promoter”. Some promoters are very good at setting up the presentations so that real information is imparted. Others are there just to get butts in the seats and the presentations tend to suffer for it. While I enjoy a good meal, I don’t want to waste my time. However, I do partake in some of the movies since they usually take place on a Friday (my day off) and I use them to network with the VAR and other IT professionals.
Other events in the expensive category:
- Tickets to major golf tournaments
- Tickets to basketball games
- Tickets to concerts
Other events in the not-so-expensive category:
- Tickets to baseball games (many can be bought in volume for under $10 each)
- Kart racing (fast go-karts)
- Lunch and games at a large entertainment venue such as Dave & Busters
What else have you seen? Anything outrageous?
Have you ever had a major systems failure that could be classified as a disaster or near-disaster? How did your vendor(s) of the failed systems respond? Did they own up to it? Obfuscate? Lay blame elsewhere? Help in the recovery?
Back in the fall of 2007, we had an “event” with our primary storage array. I remember it as though it occurred yesterday. I was coming home from vacation and had just disembarked from the airplane when I received a call from one of our engineers. The engineer was very low-key and said that I might want to check in with our storage guys because of some problem they were having. “Some problem” turned out to be a complete array failure.
I went home, took a shower, and then went to the office. First thing I saw was the vendor field engineer standing near the array looking very bored. A quick conversation ensued in which he told me he was having trouble getting support from his own people. Uh-oh.
A few minutes later I found our storage folks in an office talking about various next steps. I was given some background info. The array had been down for over five hours, no one knew the cause of the failure, no one knew the extent of the failure, and no one had filled in our CIO on the details. As far as she knew, the vendor was fixing the problem and things were going to be peachy again.
At this point, alarm bells should have been going off in everyone’s head. I tracked down the vendor engineer and gave him a hard deadline to get the array fixed. I also started prepping, with the help of our storage team, for massive recovery efforts. The deadline came and the vendor was no further along so I woke up my manager and told her to wake up the CIO to declare a disaster.
Along comes daylight and we still haven’t made any progress on fixing the downed array, but we have started tape restoration to a different array. A disaster is declared. Teams are put together to determine the scope of impact, options for recovery, customer communications, etc. We also called in our array sales rep, his support folks, 2nd/3rd-level vendor tech support, and more.
So here we are, all in a room, trying to figure out what happened and what to do next. The 3rd-level vendor support engineer is in another part of the country. He doesn’t know what’s been discussed, so he tells us what actually happened. Unfortunately, this was not the party line. The vendor wanted to blame the problem on something different: something that was supposedly fixed in a firmware update that we hadn’t yet applied (thus the finger-pointing begins). Not a bright idea, since we had the white paper on that particular error and we were nowhere close to hitting the trigger point. Months later, this so-called fixed problem was corrected, again, in another firmware release.
To make matters worse, while discussing recovery options, one of the vendor’s local managers said, and I quote, “It’s not our problem.” Wow!!! Our primary storage provider just told us that his product’s failure was not his problem. Yes, we bought mid-range equipment, so we knew we weren’t buying five nines or better. Still, to say that it was our fault and that we should have bought the high-end, seven-figure system was a bit much.
We recovered about 70% of the data to another array within 36 hours and then ran into a bad tape problem. The remaining 30% took about two weeks to recover. Needless to say, we learned a lot. Our DR processes weren’t up to snuff, our backup processes weren’t up to snuff, and our choice of vendor wasn’t up to snuff. We are in the process of correcting all three deficiencies.
Back to my opening paragraph, how have your vendors treated you in a disaster?