Mobility Field 2 Shows Evolving Nature of WLAN Industry

The “Tech Field Day” series of events has been an important part of my professional development life for the last several years. I’ve had the good fortune to be a frequent delegate, and I have watched Wireless Field Day (WFD) morph into Mobility Field Day (MFD) in parallel with the changing nature of the WLAN industry. As we get ready to descend upon Silicon Valley for MFD2, I can’t help but think about what this round of vendor participants says about the general state of WLAN things.

This go-round, you won’t see the usual suspects many folks think of when contemplating enterprise Wi-Fi. MFD2 is more about performance measurement and alternatives to the WLAN same-old, with Mist Systems, Nyansa, Cape Networks, Mojo Networks, and another performance measurement vendor to be announced soon.

So why no bigtime flashy AP makers?

Here’s my take on that, and there are a few contributing factors:

  • The biggest guns have relegated their WLAN parts and pieces to non-headline status. Each has declared “We’re a software company!” of late, and is now devoting time to weaving together Intent-Based Network Fabrics With SDN Flavor Crystals. And… they have their own hyper-glitzy events where non-technical Hollywood-types make attendees swoon. Meh.
  • Extreme Networks is buying up almost everyone else, so the number of competing players is decreasing.
  • Ubiquiti is now #3 in market share, and seemingly needs none of these events to get their message of “economy-priced but half-way decent networking” out to the masses.

By now, WLAN is so tightly integrated with the rest of the network (in most environments) it doesn’t command the stand-alone Wow Factor it once did. But… in the rush to build feature-heavy (I’d even say “gratuitously bloated”, but I can be a wanker about these things) super systems, the big guns haven’t done all that well in natively providing many of the capabilities that MFD 2’s vendors will be briefing us (and those tuning in live) on.

From innovative ways of showing what’s really going on with a given WLAN to fresh approaches to WLAN architecture (as opposed to bolting an API onto years-old code and declaring it new SDN), MFD2 will be interesting.

If you tune in live and would like to get a question to the vendors as they present their stuff, make sure to hit up a Delegate or two via Twitter so we can ask on your behalf.




Cisco DNA SD-Access: Evolution or Identity Crisis?

This blog will make the most sense to those who use (or are very familiar with) both Cisco and Meraki network environments. (Not you? Feel free to leave, but before you go let me at least show you my boat– and yes, it is for sale. OK, now get outta here.) For the rest of you… onward we go.

Get Your SD-Access Mind Right

I’ve been trying to educate myself on Cisco’s latest evolutionary moves, as I happen to be a twenty-year Cisco customer. There’s a lot of energy going on between DNA, SD-Access, THE NETWORK. INTUITIVE. (God that one is just terrible, I’m sorry) fabric this and that, and a procession of other grandly named initiatives. It’s all very fascinating, and impressive to a certain degree. I want to share my impressions on SD-Access specifically, and am curious what others in the game might think of my take on it after you digest all of this.

First- you have to understand what SD-Access is all about. This will get you started if you need a kick-start, but I suggest getting a better look by viewing these Techwise TV episodes.

Now the part about Meraki. After learning about SD-Access, it feels to me that Cisco is trying to somewhat “Merakify” their network approach. SD-Access even starts with the Meraki-style networky map, before continuing in many Meraki-ish ways:

SD-Access Map

Compare to the Meraki map:

Meraki Map

The similarities continue- in the videos the presenters enthusiastically talk about doing virtual configurations for equipment that’s not in place yet, etc. Much of this is Meraki 101 in look and feel, but with significant operational differences.

All That’s Good About Meraki, All That’s Bad About Cisco?

As a long-time Meraki customer, I have LOVED not having to deal with the administrative and OpEx pain that comes along with Cisco’s approaches at times. With Meraki there is NO bloated, chronically quirky NMS (like PI), or wireless controllers that have their own history of hardware and code issues. All that’s in the cloud, and someone else’s problem to keep up at upgrade and debug time.

(I am NOT saying Meraki is perfect, by any means. All solutions are trade-offs. I’m only pointing out that the hundreds of man-hours per year in OpEx troubleshooting bugs and such in PI and WLC have not had equal headache on the Meraki side for me.)

With SD-Access, it seems that APIC-EM becomes the on-premise magic that is equal to the magic that Meraki uses out in the cloud, but only for Merakifying traditional Cisco components. So at the end of it all, if you have the right Cisco components, SD-Access will give you a very Meraki-like experience from the admin side.

Now, I do realize that SD-Access does A LOT of stuff, and likely delivers some features that Meraki can’t right now. But..

I actually use daily many of the Cisco components that fall under the SD-Access framework, and they can demand copious amounts of care and feeding. For the Wireless LAN Controllers (just one example), you may have to play several rounds of Let’s Make a Deal with TAC to get code that works well enough in your environment- and the larger your environment, the harder it seems to be for Cisco to test at scale. Having been around this block with Cisco dozens of times, I have no reason to think the underlying culture of bug tolerance and hyper-complexity is going to change any time soon. So often-problematic components become part of a new, API-driven architecture? That’s fairly terrifying to me.

At the same time, achieving “The Meraki Experience” is an admirable goal, as using Meraki’s own approach has been fairly fantastic for me, by and large (with only the rare “oh shit” moment along the way.)

The Point

I think it’s awesome that Cisco can try to poach what’s good from Meraki (and vice versa), but it also makes for confusion. If Cisco is trying to be Meraki for access, then what’s Meraki supposed to be at the end of it all? Or will SD-Access be ultimately marketed as “on-premise Meraki” or some such?

Meanwhile, I can’t imagine the inevitable TAC case nightmare that will come when something isn’t working in SD-Access and I have to wade through PoE bugs on switches, any number of problems on WLCs, API debugs and ISE logs to figure out which part of the magic isn’t behaving THIS time around. For me, if I want a Meraki-like experience, I think I’ll opt to stay with Meraki’s lack of in-house moving parts and give SD-Access a pass- at least until something happens on the Cisco side that convinces me the solution won’t be as buggy as its parts.

Your thoughts? Please share an opinion, as all are valued.

CLUS 2017- The Elephant in My Room

I’m not at Cisco Live in Las Vegas right now, but am living it vicariously through various tweets, podcasts, and similar bursts of real/near-real-time snippets of information from those who are attending. As a Cisco Champion and industry watcher, I’ve also gotten a bit of a whiff of at least some of what’s cooking at CLUS in the form of early briefings and such. There’s no doubt that Cisco is impressing many with promises of “network intuitive” and “intent-based networking”, but there’s also an undercurrent of skepticism trickling out.

Why would anyone have doubts about Cisco’s next big thing?

For me, I try to look at it from two perspectives- as best I can as a long-time Cisco customer:

  1. What would I think of all of this if I was shopping for a new solution and wasn’t all that familiar with Cisco?
  2. As a long-time Cisco customer, what am I energized about? What is off-putting about the messaging coming out of CLUS?

I would imagine that if I were new to the Ciscosphere, I’d maybe think that this is all very exciting and cutting-edge sounding. Perhaps I’d think that some of this sounds very Avaya-esque in the notion of a super-advanced network fabric-y thing that breaks from traditional networking in a number of exciting ways. Maybe I’d get all jazzed about the promise of reduced labor and administrative overhead that comes with doing networking in a whole new way. In other words, I’d probably get the warm fuzzy that Cisco is hoping to create with its current full-court press on marketing and dazzle.

But, I’m not new to Cisco, so I can’t do more than cross fingers that this new buzzword-clad architecture actually solves/prevents problems and doesn’t stretch expensive licensing paradigms into the ridiculous. It’s fair to say that I’m seriously jaded. I’ve seen one initiative after another come and go, always with fancy names and high promise.  That’s OK, and I’m not throwing dirt- vendors gotta try stuff, and everything in the IT world evolves. Just don’t expect me to swallow that it will be the end-all. Within a few years, the next big thing will show up.

What I’m NOT OK with is some of what is being presented at CLUS, as it feels incomplete. Many of the promises being made are predicated on an assumed foundation of good code under all of the new magic. As the long-time customer, I see no evidence that Cisco’s own intolerance for crappy code is getting any closer to mine. When bad code hits my environment- and bad code hits frequently- I have to act quickly to get tens of thousands of users back on track. Cisco seems to not feel the urgency, as churning out problematic code has become routine (in my estimation).

The new stuff HAS to get better. It can’t be built on today’s problems.

I’m not the only one looking at flashy infographics from CLUS and seeing my own edits write themselves into the slides.


I’m really not bitter- just battered and beat up a bit by code (and hardware) problems that suck up hundreds of man-hours a year to get past. I want to believe that “network intuitive” will be transformative. But first I need to hear how the underlying culture that has allowed so many problems out the door is going to change. It’s hard to accept that somehow we’re spending too much on OpEx and need new network magic to reduce it when a significant portion of those costs come from dealing with code bugs from the vendor that promises the new magic.

To not address these code shortcomings and their underlying culture straightaway is to already cut into the excitement that should be felt about “network intuitive”.

Sequel: A Week in the Life- Cleaning Up Afterwards- When WLAN Pieces Don’t Live Up to Their Responsibilities

Captain’s log, stardate 170619. I have just piloted the SS Enterprise WLAN out of the Codesuck Nebula after hostilities with both the Switchites and the WAPs. It was a trying 48 hours of lost man-hours cleaning up after a breakdown in WLC update procedures, but I’m glad to be heading home. Regrettably, we did suffer casualties. Two valiant 802.11ac access points were cut down in their prime (hee hee, Prime). Ah well, time for an adult beverage and some cheese.

– Captain Beef Wellington, Intergalactic Wi-Fi Warrior

I feel for Captain Wellington. In fact, it’s impossible to tell his story without revealing a bit of my own. Do you remember this missive about network bits and pieces not living up to their responsibilities? Of course you do. And now that the cleanup work is done from that misadventure, let’s talk about the indirect costs of a code upgrade gone a bit wrong on a large wireless network.

On this particular code upgrade, I did three failover pairs of WLCs. The first pair hosts 144 APs. The second, 908 APs. The third currently has 3,212 access points. All WLCs are the same model, had the same starting and ending code, and all APs are uplinked to switches of two different models (but all running the same OS version).
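Incidentally, that smallest-first ordering is the part of the procedure worth keeping even when the gear misbehaves: problems surface on the least-impactful pair first. A minimal sketch of the idea, with hypothetical `upgrade` and `verify` hooks standing in for whatever your change process actually does (these names are mine, not any Cisco API):

```python
def staged_upgrade(pairs, upgrade, verify):
    """Upgrade WLC failover pairs smallest-first, stopping the rollout
    the moment a pair fails verification. 'pairs' is (name, ap_count)
    tuples; 'upgrade' and 'verify' are caller-supplied callables --
    illustrative hooks only, not a real controller API."""
    results = []
    for name, ap_count in sorted(pairs, key=lambda p: p[1]):
        upgrade(name)            # push code to this failover pair
        ok = verify(name)        # health check before moving on
        results.append((name, ap_count, ok))
        if not ok:
            break                # don't touch the bigger pairs yet
    return results

# Simulated run: the 908-AP pair fails verification, so the
# 3,212-AP pair is never touched.
pairs = [("pair-C", 3212), ("pair-A", 144), ("pair-B", 908)]
upgraded = []
res = staged_upgrade(pairs, upgraded.append, lambda name: name != "pair-B")
print(res)  # [('pair-A', 144, True), ('pair-B', 908, False)]
```

Nothing fancy- the point is simply that ordering by blast radius, and gating each stage on verification, is cheap insurance.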

The first WLC pair went swimmingly. The WLC pair and 144 access points upgraded in a textbook maintenance maneuver that yielded no surprises.

The second upgraded pair was generally OK, but three APs were orphaned. They seemingly lost their configurations and names, and kept hitting the upgraded controller and falling away. Over, and over, and over, and over, and over, and over. This went on until their switchports were identified, and the interface PoE was cycled. Then TWO came back fully configured, properly named, and code-upgraded, while the remaining AP did upgrade, but lost its shit and had to be fully reconfigured.

    • The loss of use of each AP during their little visit to the Muffin Man
    • Around a man-hour and a half to locate the APs’ MAC addresses in switching, deal with the PoE, verify, and configure the lone problem child.
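For what it’s worth, the “locate the orphaned APs in switching” step is scriptable. A hedged sketch- parsing a `show mac address-table`-style dump to map known AP MACs to their switchports. The table layout and the Cisco dotted-quad MAC format here are assumptions; adjust the regex for your platform’s actual output:

```python
import re

def find_ap_ports(mac_table_output, ap_macs):
    """Return {mac: port} for each AP MAC found in a
    'show mac address-table'-style dump. The line format matched
    below is an assumption -- verify against your own gear."""
    ports = {}
    # Example line:   10    0011.2233.4455    DYNAMIC     Gi1/0/12
    line_re = re.compile(
        r"^\s*\d+\s+([0-9a-f]{4}\.[0-9a-f]{4}\.[0-9a-f]{4})\s+\S+\s+(\S+)",
        re.IGNORECASE,
    )
    for line in mac_table_output.splitlines():
        m = line_re.match(line)
        if m and m.group(1).lower() in ap_macs:
            ports[m.group(1).lower()] = m.group(2)
    return ports

sample = """\
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
  10    0011.2233.4455    DYNAMIC     Gi1/0/12
  10    aabb.ccdd.eeff    DYNAMIC     Gi2/0/3
"""
orphans = {"0011.2233.4455"}
print(find_ap_ports(sample, orphans))  # {'0011.2233.4455': 'Gi1/0/12'}
```

From there, bouncing PoE on the identified interfaces is still a manual (or separately scripted) step- but at least the hunt across dozens of switches gets faster.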

The last and largest environment didn’t go so well for the upgrade. That’s despite the fact that this environment has not changed much since the last upgrade, and that I have done this procedure many times in the past. Here, around 80 access points did not take the upgrade- for the math-minded, that’s about 2.5 percent of the APs in this big environment. Many completely dumped their configs and went stupid, some only seemed stupid until PoE resets, about half needed multiple PoE resets (after waiting a goodly period each time to see if the AP would snap out of it), and two completely failed and had to be replaced.

    • The loss of use of each AP during their outage- that’s a lot of capacity denied to end users
    • Because the APs that failed the upgrade were scattered far and wide on several dozen switches, and that many needed to be power cycled more than once, it took at least the equivalent in hours of five full working days at the engineer level to tame the chaos and reconfigure those APs that needed it.
    • Two current-model APs were irrecoverably lost in this process
    • One man-hour per AP to get each replaced

Items of note throughout this:

  • We did the code upgrade in the name of stability and bug fixes on the WLAN side (yes, irony- shut up)
  • We recently learned of a PoE bug or two on the switching side, which may or may not have been in play
  • Top-end gear is not without problems
  • Even “routine” changes can go off the rails, at least in this product set
  • System complexity and scale lead to more indirect costs in the form of support overhead- that’s just a fact of life on certain product sets
  • There is no moving away from bugs, only trading bugs for other bugs- at least in my own reality

And there you have it.



To That Guy From 1990 That I Gave $11 To- I Need a Little Help Here

Let me get two points out there straightaway: this post has nothing to do with technology (remember, this is my mostly wireless blog), and I will be making a pitch to help someone in a need. Feel free to bail now, or read on about a decent young man trying to raise some funds.

Reaching Waaaay Back Into the Time-Karma Continuum

In 1990, I was a newly-married young airman in the US Air Force, living in base housing that was detached from Keesler Air Force Base in Biloxi. Down the road from my neighborhood on the same street was a low-income neighborhood that we called “The Projects”. One day while my wife was at work, I was fixing my bicycle in the driveway when a resident of the projects walked by, and struck up a conversation that quickly moved to him saying “… and that’s why I need some money. I’ll pay you back when I can.” I don’t remember what his story was, but I do remember he was roughly my age, and I really didn’t believe a word he said.

But I also remember thinking that I had a regular payday coming soon, and this guy probably didn’t. I had no idea where the eleven bucks I would give him would get spent, and didn’t really care. I don’t think of myself as an overly “Christian” person, but every now and then when I see someone needing help, I do what I can. This was one of those times. Lots of people have been kind to me through the years, so I try to give a bit back when I have it to give.

Fast Forward to Today, and Introducing Adrian.

I never saw that money again, but I hope it somehow helped that stranger. On the long-shot chance that he’s reading this (OK, it’s a ridiculous long-shot), or anyone else who believes in Paying it Forward, I want to introduce Adrian. If you’re out there, 1990 Eleven Dollar Guy- please consider giving the eleven bucks I gave you back then to Adrian now.

Adrian is half-way through his BS degree and pilot training at Embry-Riddle, and circumstances have conspired against him to put his professional future in serious jeopardy. He’s an awesome young man, and he’s trying to do everything right despite some sudden financial challenges.

That eleven bucks would help. So would anything that anyone feels compelled to donate to one of the sweetest young men I’ve ever met.

Adrian’s Go Fund Me page is here. If I didn’t know him personally, I wouldn’t be sharing this.

And thank you who read through this for letting me take a time-out from technology to spread the word.


A Day in the Life- When WLAN Pieces Don’t Live Up to Their Responsibilities

I stared into the darkness, and softly spoke
“What the shit is this? Why didn’t it reboot?”
The early morning mocked me
The clouds and the birds and the rising sun
Even my first cup of coffee
All sang and screamed and laughed
“Your stupid WLC didn’t reboot! It didn’t reboot!”
And so I laughed, like an idiot, as not to cry.
Beef Wellington, from The Controller Chronicles

Sigh… Sometimes things don’t do what they’re supposed to do. Like in the case of a simple Cisco 8540 controller upgrade. It matters not that I’ve done this procedure about a hundred times through the years. THIS TIME, the controller had its own idea about how this code upgrade would go down.

And Time, a maniac scattering dust,
And Life, a Fury slinging flame.
Tennyson, from In Memoriam

no reboot

The maintenance window was claimed. Change control was done. Code was downloaded to the 8540. And… the required reboot was scheduled May 30, at 0400.

Yet… 05:16 rolled around on May 30, and the reboot was still configured for 0400.

Have you seen the bruise on that man’s head?
   -Professor, on Gilligan’s Island: Waiting for Watubi episode

The bruise is mine. From beating my head against the wall. But whatever… forcibly reboot the 8540 (slightly outside of the maintenance window- but don’t tell anyone). Now all is good- except for dozens of APs that lose their config in the process.

APs default

EXCEPT not ALL of the APs that went to defaults REALLY went to defaults. Only about 20% did. The rest come back proper, with full config, if you remove and restore their power. It makes no difference that they are correctly showing in CDP, drawing good inline power, etc. You’ll reboot if you want them back. That other 20%? They really are defaulted. Build their configs from scratch, and shut up about it.

I need you, I need you, I need you right now
Yeah, I need you right now
So don’t let me, don’t let me, don’t let me down
I think I’m losing my mind now
It’s in my head, darling I hope
That you’ll be here, when I need you the most
So don’t let me, don’t let me, don’t let me down
D-Don’t let me down
Don’t let me down
Chainsmokers, Don’t Let Me Down

Sorry, Chainsmokers. Letting people down is kind of a way of life in/for these parts.

Why You Should Care About MetaGeek’s MetaCare

To the WLAN support community, there are just a few tools that are truly revered. Among these are the various offerings by MetaGeek. I still have my original Wi-Spy USB-based Wi-Fi spectrum analyzer dongle that I used a million years ago when 2.4 GHz was the only band in town, but have also added almost every other tool that MetaGeek offers. Go to any WLAN conference or watch the typical wireless professional at work, and you’ll see lots of MetaGeek products in play. So… is this blog a MetaGeek commercial? I guess you could say so to a certain degree. I decided to write it after my latest renewal of MetaCare to help other MetaGeek customers (and potential customers) understand what MetaCare is all about.

I queried MetaGeek technical trainer Joel Crane to make sure I had my story straight, as MetaCare is one of those things you refresh periodically so it’s easy to lose sight of the value proposition. Straight from Crane:

MetaCare is our way of funding the continued development and support of our products. It’s also a great pun (in my opinion), but people outside of the United States don’t get it. When you buy a new product, you basically get a “free” year of MetaCare. When MetaCare runs out, you can keep on using the software, you just can’t download versions that were released after your MetaCare expired.

On this point, I have let my own MetaCare lapse in the past, then lamented greatly when an update to Chanalyzer or Eye P.A. became available. You have to stay active with your MetaCare to get those updates! Which brings me to Crane’s next point.

When you renew MetaCare, it begins on the date that MetaCare expired (not the current date). Basically, this keeps users from gaming the system by letting it lapse for a year, and then picking up another year and getting a year’s worth of updates (although I try to not point fingers like that- generally our customers are cool and don’t try to do that stuff). MetaCare keys are one-time use. They just tack more MetaCare onto your “base” key, which is always used to activate new machines.

Like any other decent WLAN support tool, you gotta pay to play when it comes to upgrades. At the same time, I do know of fellow WLAN support folks who have opted to not keep up their MetaCare, and therefore have opted out of updates. Maybe their budget dollars ran out, or perhaps they don’t feel that MetaGeek updates their tool code frequently enough to warrant the expenditure on MetaCare. As with other tools with similar support paradigms, whether you choose to pay for ongoing support is up to you. But I give MetaGeek a lot of credit for not rendering their tools “expired” if you forego MetaCare.

Crane also pointed out one more aspect of the MetaGeek licensing model that is actually quite generous (other WLAN toolmakers could learn something here!):

Speaking of base keys, they can be activated on up to 5 machines that belong to one user. Each user will need their own key, but if you have a desktop, laptop, survey laptop, a couple of VMs… go nuts and activate your base key all over the place.
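For the curious, the renewal rule Crane describes- new coverage starts at the old expiration date, so a lapse eats into the new term- is easy to model. This is just my reading of the policy sketched in a few lines, not MetaGeek’s actual licensing code:

```python
from datetime import date, timedelta

def renew_metacare(expired_on, renewed_on, term_days=365):
    """Model the renewal policy: the new term is anchored to the old
    expiration date, not the renewal date. Returns the new expiry and
    how many lapsed days the buyer 'loses'. Illustration only."""
    new_expiry = expired_on + timedelta(days=term_days)
    lapsed = max((renewed_on - expired_on).days, 0)
    return new_expiry, lapsed

# Let coverage lapse for 90 days, then buy a one-year renewal:
new_expiry, lapsed = renew_metacare(date(2017, 1, 1), date(2017, 4, 1))
print(new_expiry, lapsed)  # 2018-01-01 90
```

In other words: renew on time and you get the full year; lapse for 90 days and your “year” of updates is really nine months of future coverage.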

And now you know. As for me, my MetaCare costs are a business expense that I don’t mind paying- and I’m really looking forward to new developments from MetaGeek.

But wait- there’s more! Thanks to Blake Krone for the reminder. MetaGeek has a nice license portal for viewing and managing your own license keys, so you don’t have to wonder where you stand for available device counts, license expiration, etc.