Category Archives: WLAN

Contemplating APIs and the WLAN State of Things

Having just attended the 2019 Wireless LAN Professionals Conference (WLPC), I got a few days full of really interesting perspective from other WLAN doers. I saw and heard predictions, hopes, and fears for what comes next as we roll toward 802.11ax, the coming of 6 GHz spectrum to Wi-Fi, and more widespread use of WPA3. There was a lot of good chatter, because there simply is no conference like WLPC (I recommend it to anyone who is in WLAN practice/management, or who oversees those who are).

One thing I heard A LOT about was APIs. And using Python to get more out of our WLAN hardware and management systems. And… how “you should all learn to do coding!” I have no issues with any of these, but I also tend to be a 10,000-foot thinker, so I couldn’t help but ponder the real-world implications of all that when it comes to how wireless systems are actually run day-to-day. Talking with others at the event, I also found that I wasn’t alone in my contemplation.

Let me get right to my points- I have great appreciation for the flexibility and capabilities that using APIs can bring to a WLAN system. But… that is balanced by a number of concerns:

  • If a vendor has historically put out crappy code that is developer-driven versus engineer-driven, how do we trust the developers to get it right when it comes to what data awaits engineers at the end of the APIs?
  • I fear that “and we have an API!” can become a cop-out for NOT putting out a complete enough NMS for the high costs that you’ll still pay. As in… “oh, THAT feature is leveraged via the API”, and not in the expensive management GUI that maybe now is missing common-sense basic functionality.
  • In some ways APIs-to-the-rescue is a huge step forward; in other ways it’s an admission that vendors sometimes can’t build an NMS that doesn’t suck (because if they could, maybe we wouldn’t need APIs?). Maybe…
  • Not all WLAN staff teams will want to be in the programming business. Time will tell if they will be able to work effectively as they avoid the API and try to stick with the NMS and non-API tools.
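To make the “use the API” advice concrete, here’s a minimal sketch of the kind of thing people mean: pulling AP health data out of a controller with a few lines of Python instead of clicking through the NMS. The endpoint shape, field names, and AP names below are all hypothetical for illustration, not any particular vendor’s API.

```python
import json

# Hypothetical JSON payload, shaped like what a WLAN controller's REST
# API might return for an AP inventory call. Real vendors' schemas will
# differ -- check your controller's API reference.
SAMPLE_RESPONSE = """
{
  "access_points": [
    {"name": "AP-LIB-101", "clients": 37, "status": "up"},
    {"name": "AP-GYM-204", "clients": 212, "status": "up"},
    {"name": "AP-DORM-310", "clients": 0, "status": "down"}
  ]
}
"""

def down_aps(raw_json):
    """Return the names of APs reporting a 'down' status."""
    data = json.loads(raw_json)
    return [ap["name"] for ap in data["access_points"]
            if ap["status"] == "down"]

def busiest_ap(raw_json):
    """Return (name, client count) for the most loaded AP."""
    data = json.loads(raw_json)
    ap = max(data["access_points"], key=lambda a: a["clients"])
    return ap["name"], ap["clients"]

if __name__ == "__main__":
    print(down_aps(SAMPLE_RESPONSE))    # ['AP-DORM-310']
    print(busiest_ap(SAMPLE_RESPONSE))  # ('AP-GYM-204', 212)
```

In practice you’d fetch that JSON over HTTPS with an API token rather than a canned string, but the point stands: once the data is out of the controller, a short script replaces a lot of GUI clicking.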

None of this is necessarily my own strict opinion as I digest everything I’ve seen and heard at this year’s WLPC, but I heard enough from other people to know that the community is not in lockstep embrace of “API all the things”. Some teams are just stretched thin already, and pay a good buck for vendor tools so they don’t have to become programmers to keep their WLANs on the rails. Then there’s the always-relevant “evolve or watch your career die” school of thought that can’t be ignored either.

Fascinating times. Much change is in the air.

Now onto one of the most interesting things of all that I heard at WLPC: more on OpenConfig. Mike Albano from the Enterprise side at Google planted some fascinating seeds back in 2017 with a presentation he did at that year’s conference:

Introduction to OpenConfig; What Is It, What Does It Mean To Wi-Fi | Mike Albano | WLPC 2017 Phoenix from Wireless LAN Professionals on Vimeo.

Mike was on the stage again this year doing a little follow-up on progress made with OpenConfig. He also participated in a Whiskey and Wireless Podcast with a couple of nicely-hatted lunatics and shared even more with an eager audience. I suggest you keep an eye out for both his recorded WLPC presentation and the podcast when they go live (I’ll add the links here as well), because OpenConfig is the API concept on steroids. As mentioned in the 2017 video, but expanded on this year, OpenConfig seeks to make the software side of many vendors’ wireless offerings largely irrelevant. You gotta hear it.

Given that we’re in an era where WLAN vendors have declared themselves “software companies” who happen to put out some pretty crappy software and then charge through the nose for it, OpenConfig is interesting for reasons far beyond its API-ness.

Like I said, these are fascinating times.

Enhance Your Wi-Fi Mojo With Old-School Radio Hobbies

I have this odd love of some really arcane signals. With a modest but decent receiver from Tecsun (the PL-880), I take advantage of the winter months in the northeast (less atmospheric electricity and no thunderstorms) to “hear” these quirky Longwave signals churn out slow Morse Code identifications. It’s utterly addicting to the right-minded radio geek, and also draws large parallels to what goes on with Wi-Fi that help reinforce my WLAN foundational knowledge.

For wireless networks, we know that output power, antenna choices, the environment where we’re operating, and the capabilities of the client devices all contribute to whether Wi-Fi is “good” or “bad”. If the signals can’t get through, then the microprocessors involved can’t turn those signals into data. Let’s talk about what it feels like to listen to NDBs for a bit, then how that relates to Wi-Fi.

I live about an hour south of Lake Ontario in the middle of New York state. With my beloved Tecsun PL-880, I recently received an NDB signal from Pickle Lake’s little airport in Ontario, Canada. This location happens to be several hundred miles away. The beacon transmitter (considered a “navigation aid”) at the airport generates a fairly low-power cone of signal into the sky, more or less straight up (that’s the non-directional part of “NDB”). The intelligence in the signal is simply slow Morse Code continuously looping the letters Y-P-L. See this link for information on the airport.

Pickle Lake

Given that any beacon is typically low powered and pointed straight up, finding them on the air from afar is a sport unto itself. Longwave spectrum sits below the AM broadcast band, way down where frequencies are measured in kilohertz. It’s absolutely cluttered with man-made signals, and is at the mercy of natural electrical interference, like lightning strikes (called “static crashes” in the radio world). Yet I was able to discern that slow Y-P-L signaling from across a huge Canadian province and a Great Lake, making it an accomplishment as a signal-chasing radio hobbyist.

If you’re not familiar with Morse Code, that Y-P-L renders as – . – – / . – – . / . – . . (dash-dot-dash-dash/dot-dash-dash-dot/dot-dash-dot-dot).
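The dot-dash mapping is simple enough to sketch in a few lines of Python. This table only covers the three letters in this beacon’s identifier; a real one spans the whole alphabet and digits:

```python
# Partial Morse table -- just enough to encode this beacon's ID.
MORSE = {"Y": "-.--", "P": ".--.", "L": ".-.."}

def encode(ident):
    """Render a beacon identifier as Morse, one letter per space-separated group."""
    return " ".join(MORSE[ch] for ch in ident.upper())

print(encode("YPL"))  # -.-- .--. .-..
```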

In 802.11 WLAN, specialized modulation helps to ensure that the important signals prevail despite RF conditions being crappy enough to kill narrow-band signals. I see Morse Code as somewhat akin to spread spectrum when I’m chasing NDBs, as the dots and dashes can often be heard through really bad conditions that would utterly destroy voice signals. (This is actually why Morse Code was created and used as a mainstream long-distance radio communications mode for so long.)

When Wi-Fi signal quality is degraded, data rates will decrease. When I hunt down NDBs like the Y-P-L signal, I might have to listen to each for several minutes and manipulate the filters on my receiver before I know what I’m actually hearing- and sometimes I just can’t quite get clarity. For this and other radio activities, my own ears and mind are the actual microprocessor. Call me silly, but each beacon identified is like catching a nice fish and brings its own little flicker of excitement. Here’s a great list of Longwave NDBs out there to chase, and there are many other lists to be found online.

For improved reception, I could connect my PL-880 to a better antenna, just like in Wi-Fi. I could improve my “data rates” (or words-per-minute copying) by using better filters and practicing my Morse Code more. This would make me a better “microprocessor” in this activity.

Really geeky stuff, eh? I have no problem wearing that label. I also know that there are other radio nerds out there in the WLAN community, as well as those who want to learn more about radio “stuff” beyond Wi-Fi. For those folks, I’ll be teaming up with Scott Lester to present “Radio Hobbies for the WLAN Professional” at the 2019 WLAN Professionals Conference. Sign-ups start mid-December, and I hope to see many of you there!


Stop the Little White Wi-Fi Lies- Data Sheet Specs Matter

There I sat in a pleasant regional users meeting for a large networking company.  It was a decent presentation that provided me with some food for thought, and so I was glad I went. But there was one statement made while talking about pending 802.11ax access points that raised my dander.

I’m paraphrasing here…

“The data sheet will say the AP can do over 1,000 clients, but you know how that is…”

Hmmm. There was some discussion in the room after that statement- I asked how we as Wireless Doers are supposed to reconcile these grand claims that WE all know are bullshit with the expectations of CUSTOMERS who DO NOT recognize the same info for the operational untruth that it is.

“It’s theoretical”

“Everybody does it”

Again… hmmm. Lee’s not buying it. A lie that we all choose to live with is still a lie. This isn’t even the biggest whopper out there… Another vendor right now is touting an AP that can do FIFTEEN HUNDRED CLIENTS!

So all I need is a dozen per stadium and I’m the most efficient LPV wireless guy in the land, no? I can design my networks for 1 AP for every 1,000 (or 1,500) client devices and reduce my AP spend significantly! All right! Except it doesn’t work this way.

Why do I care, really? What about this one little falsehood got me perturbed? Because we spend money based on what data sheets tell us. It’s insanity to SEE one number, but then have to go ask someone else what that one number REALLY means. Let me tell you a couple of stories of data sheet burn that I still carry scars from.

When 10 Gig Is Not


This screen grab comes from a leading vendor’s now-EOL wireless controller. 10 Gbps is clearly stated as what the controller will “do”, at least by my interpretation. Nowhere in the spec sheet does it say “…unless you run a highly desirable feature called Application Visibility and Control, which then knocks the unit’s throughput capabilities to well under 3 Gbps”. That little gem you have to discover for yourself and suffer through… while 20K wireless clients get pissed off as the WLAN core melts down. No “if this, then that” qualifiers to explain that a popular feature would neuter your throughput by an order of magnitude- just “10 Gbps”. I fell for it, and got burned bad.

Is 3,000 APs + 802.1X Significant, or No?

Same vendor, beefier controller.
In the midst of another support case that impacted multiple users, the TAC person said something like “I see you have over 3,000 APs and are doing 802.1X…” with great concern in his voice. I asked- “So? Is this a problem on a controller that supports 6K APs?” The fellow put me on hold for several minutes to talk with somebody else about the point. Meanwhile, a colleague in another part of the world sent an email raising the same flag on one of his own support cases- there seemed to be a common TAC-side fixation with 3K APs and 802.1X on a controller that is rated for 60K clients. My TAC guy eventually came back and said “um, no, that should be OK” in a voice that didn’t exactly inspire confidence, and it immediately hearkened me back to the great meltdown on the other controller. The point was raised yet again by another support person as the case played out, who also avoided explaining why this seemed to be of concern when I asked.

I still have no idea whether 3K APs and 802.1X are the ingredients for an eventual meltdown on this controller, or whether perhaps inexperienced support engineers talked out of school. Given my past experiences on this product line (I’ve only mentioned the tip of the iceberg here), my confidence was very much shaken by the thought of some sort of undeclared 3,000 AP “wall” that I had hit. (A code upgrade, or rather multiple code upgrades, eventually got me past whatever the original problem was in this case.)

To me, the data sheet is gospel as presented – if there are exceptions, caveats, qualifiers, or whatever- the vendor needs to get it out there ON THE DATA SHEET. My end result- I have little confidence in ANYTHING to do with spec from this vendor on this product set.

Speaking of exceptions, caveats, qualifiers, or whatever…

The Enterprise WLAN vendors can actually learn from the “little guys” when it comes to technical honesty. Have a look at what Amped Wireless includes on their data sheets:

Specifications are subject to change without notice.

1 Range specifications are based on performance test results. Actual performance may vary due to differences in operating environments, building materials and wireless obstructions. Performance may increase or decrease over the stated specification. Wireless coverage claims are used only as a reference and are not guaranteed as each wireless network is uniquely different. Maximum wireless signal rate derived from IEEE 802.11 standard specifications. Actual data throughput may vary as a result of network conditions and environmental factors. Output power specifications are based on the maximum possible radio output power plus antenna gain. May not work with non-standard Wi-Fi devices such as those with proprietary software or drivers. Supports all Wi-Fi standards that are compatible or backwards compatible with 802.11a/b/g/n/ac Wi-Fi standards.

2 All transmission rates listed, for example 800Mbps for 2.4GHz and 1733Mbps for 5GHz, are the physical data rates. Actual data throughput will be lower and may depend on external factors as well as the combination of devices connected to the router. AC2600 wireless speeds are achieved when connecting to other AC2600 capable devices.

3 May not work with non-standard Wi-Fi routers or routers with altered firmware or proprietary firmware, such as those from third party sources or some Internet service providers. May not work with routers that do not comply with IEEE or Wi-Fi standards.

4 For MU-MIMO to work, additional MU-MIMO capable devices must be connected to the network.
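Footnote 2 above quietly documents a marketing convention worth knowing: the “AC2600” class number is just the per-band maximum PHY rates added together and rounded up to a tidy figure, never a throughput anyone will actually see. A quick check of the arithmetic (the rounding-up is my reading of the convention, not something the data sheet spells out):

```python
# Per-band maximum PHY rates from the data sheet footnote, in Mbps.
rates = {"2.4GHz": 800, "5GHz": 1733}

# Sum of PHY rates across bands -- a number no single client can ever
# reach, since a client connects on one band at a time.
total_phy = sum(rates.values())
print(total_phy)  # 2533 -- which gets marketed as "AC2600"
```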

You can argue that no one reads the fine print, but I would disagree. As is, deception and partial truths are problematic and confusing. What else on the data sheet can’t be trusted? And why do it at all? Seriously- why say an AP will do 1,000+ clients? Where is the “win” for anybody other than the Chief Embellishment Officer?

What Wi-Fi Tools are MetaGeek and Oscium Cooking Up Together?

As I write this, the 2018 Wi-Fi Trek Conference is going on in San Diego. I’m not attending (mostly because Boardman is there) but I am listening to various comments being made about the event goings-on through the many channels that all of us WLAN types use to keep each other updated. There’s a lot of good chatter, and I wish my CWNP family the best of luck with the conference (I am on the CWNE Advisory Board, you know… I run in those circles.) One little nugget from Twitter caught my attention, in particular.


I happen to have products from each company, and both are among my favorite tools when it comes to WLAN support. After the tweet, I went and found MetaGeek’s own announcement on the new partnership, which you can read about here.



Now, betwixt you and me- neither company has been especially active of late as far as getting new tools (or even updates to existing tools) out in front of us loyal customers, and I’m glad to see hope of that changing.

I’ve written about Oscium in the past and still think their WiPry 5x is one of the slicker spectrum analyzers out there for those of us who have familiarity with real lab-grade spec-ans. I’ve also covered MetaGeek through the years, and was fortunate enough to see their presentations at multiple Tech Field Day events. You won’t find nicer folks than MetaGeek’s current and past employees… must be a Boise thing.

Now back to that announcement of a partnership between MetaGeek and Oscium. We still don’t know a lot, but this is pivotal from the MetaGeek blog:

MetaGeek plans to partner with Oscium for additional hardware offerings moving forward as part of the company’s shift to focus on the software side of their industry-leading Wi-Fi analytics solutions.

Just as Ekahau has realized, you can only take legacy USB adapters so far in the world of 802.11ac (and soon .11ax) wireless support tools. MetaGeek has had profound impact on the WLAN industry with their USB-based stuff, but it also became stunted despite having really effective software pairings (like Chanalyzer, inSSIDer, and the fantastic Eye P.A.). Oscium has figured out how to leverage a range of mobile devices (both Android and Apple) and their latest connectors well for use as Wi-Fi support specialty tools.

I smell synergy, baby…

I have seen nothing in beta as far as this story line goes. I’ve had no conversations of late with either MetaGeek or Oscium, so I really can’t give you anything beyond speculation and hope that good things are coming, but I also have a lot of faith in both companies.

I’m looking forward to the end of the year, and whatever announcements these two toolmakers are working on.

Say Hello to Ooklahau

If you’ve been in the business of professional wireless networking for any amount of time, you no doubt have at least a familiarity with Ekahau. For many of us, our networks would not be what they are today if it weren’t for the long-running design and survey reliability and excellence baked into Ekahau’s magic. I’ve been a customer for somewhere around 15 years, and the Ekahau experience with both predictive designs and active surveys has only gotten better with each release. The addition of Sidekick to the ESS suite was a game-changer, and the future looks bright for this Finnish company, who also happens to be well-connected to their end users, open to ideas for product improvements, and… well, downright fun to work with.

Then there’s Ookla- the Seattle-based people that pretty much anybody and everybody on the planet with a connected device has likely used at some point. They have a huge end-user facing presence with their speedtest apps, but also an impressive global presence that services enterprise customers as well. Ookla started in 2006, and has been growing their cloud-based service offerings and brand recognition ever since.

Let’s not be coy… you know where this is going. Despite my cheesy logo play, a name change IS NOT imminent for either company. But Ekahau has been acquired by Ookla, as you can read about here on Ekahau’s own blog. I did get a chance to talk with my pal Jussi Kiviniemi (Senior VP for Solution Strategy and Customer Experience) at Ekahau about the news just moments before writing this.

Customers can expect Ekahau to stay largely the same operationally for the foreseeable future, but behind the scenes the global human and technical resources of Ookla are going to mean good things over time. Jussi was practically beaming, even over the phone. This is going to make for really interesting days ahead for wireless and network performance testing for sure, and could enable some pretty fascinating things on the design side when the cloud aspect is figured in.

Congrats, Ekahau! Well done, and well-deserved.

Catching Up With NETSCOUT at MFD3, Big News, and “Body Fade” Explained

Touching Base at Mobility Field Day 3

Everybody’s favorite handheld network tool tester provided updates on their G2 and AirMagnet tools at Mobility Field Day 3. NETSCOUT hosted those of us in attendance at their San Jose office, while simultaneously live-streaming to a lot of interested folks out on the interwebs. We heard about product evolutions coming to the AirCheck G2, the LinkRunner G2, the very handy Link-Live web service, and a little bit on the AirMagnet product line. The G2 improvements are incremental, well-designed, and show that NETSCOUT is not letting grass grow under its flagship testers. The AirMagnet brief sounded a bit apologetic and fairly thin, but also not unexpected given that the line has gone almost stagnant for long periods of time.

You can watch the presentations for yourself here.

Big News

This one took us by surprise… It’s a bit weird to find out only a couple of days after being at NETSCOUT’s offices that the very product line we were discussing has been sold off to Nacho Libre… or is it StoneCalibre? Whatever… it just feels funky to those of us who know and love our AirCheck and LinkRunner products. What goes in this move?

  • LinkSprinter
  • LinkRunner (AT & G2)
  • AirCheck
  • OneTouch AT
  • AirMagnet Mobile (Spectrum, Survey, Planner, Wi-Fi analyzer)

Hopefully whoever this new backer is does not mess with all that’s good in the toolbox, and either breathes new life into AirMagnet or retires it. Read about the acquisition here.

Netscout HQ

What the Heck is Body Fade?


During the MFD sessions, we heard about several improvements- including refinements to the AirCheck G2’s Locator Tool. I tweeted out my recent success with the tool, and suggested that anyone using it become familiar with “body fade” as a technique to make the locator tool even more effective.

A couple of folks gave a thumbs-up, retweet, or similar affirmation, but one fellow emailed me to ask “what are you talking about with body fade?” Let’s talk about that just a little, using a real-world case from my adventures in G2 Land.

The notion of body fade comes into play in any situation where you have a hand-held receiver (like the AirCheck G2 or a small ham radio with a bandscope display) and are trying to locate the origin of a signal of interest. By putting your body- including your rock-hard abs- between the signal source and the tester, you can make the signal strength drop enough to notice. That means that the signal is somewhere behind you… do this enough times, and you get a really good sense of where to go look for the device faster than just running around staring at the dancing signal needle.
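The technique boils down to a loop: rotate slowly, note the signal strength at each bearing, and flag the bearing where your body knocks the reading down by a few dB- the source is roughly opposite that bearing. A toy sketch of the logic, with made-up RSSI readings for illustration (not real captures from any tester):

```python
# Hypothetical RSSI readings (dBm), one per 45 degrees of rotation,
# taken while turning slowly in place with the tester held in front of you.
readings = {0: -62, 45: -62, 90: -63, 135: -66,
            180: -62, 225: -61, 270: -62, 315: -62}

def faded_bearing(readings, drop_db=3):
    """Return the bearing where the signal dropped by at least drop_db
    relative to the strongest reading -- i.e., where your body blocked
    the path. The source lies roughly opposite that bearing."""
    strongest = max(readings.values())
    for bearing, rssi in sorted(readings.items()):
        if strongest - rssi >= drop_db:
            return bearing
    return None  # no clear fade -- maybe the source is above or below you

bearing = faded_bearing(readings)
print(bearing)                 # 135 -- your body was blocking here
print((bearing + 180) % 360)   # 315 -- so walk roughly that way
```

The `return None` branch mirrors the three-dimensional caveat below: when turning in place changes nothing, the answer is usually a floor up or down.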

In my example, we see this rascally rogue running rebellious somewhere in another part of my building:

By golly, that’s not one of mine. We gotta find the interloper and teach him or her some manners, I tellya. I fire up the AirCheck G2, invoke the locate option, and see what I see in my office.

Not so impressive yet. We have a fairly weak signal somewhere. But how to get started on this foxhunt? BODY FADE to the rescue. I hold the G2 in front of my Adonis-like physique and slowly turn (the slowly part is important)… until I see a 3-4 dB DROP in signal strength. This is my body inducing loss to the signal, and thus showing me where to turn around and what direction to walk…

OK… so I start walking, and I’m making progress. The signal is getting stronger, and I use body fade to help further refine my path. But alas- I hit an obstacle! Once I get to THIS signal strength, I’m bamboozled:

Nothing I can do from the spot of this reading with body fade changes the signal strength at all. If I walk away from the spot in any direction, the signal drops, but it is strong in this one spot. Yet the rogue is absolutely not there (in a hallway). What gives?

Remember that we’re dealing with signaling in three dimensions. When body fade at X-marks-the-spot yields no changes in signal strength, it means it’s time to go upstairs or down. In my case, there is no downstairs, so up I went. I picked up the trail, and soon hit the jackpot:
This was screen-shotted in the doorway of the office where the offending device was found. After roughing up both the rogue router and the gent who dared to plug it in, balance was restored to The Force.

Body fade is pivotal to some really neat radio hobbies- like this one.





Figuring Out What Bothers Me About Wi-Fi and “Analytics”

I’ve been to the well, my friends. And I have drunk the water.

I was most fortunate in being a participant in the by-invitation Mobility Field Day 3 event this past week. Few events get you this close to so many primary WLAN industry companies and their technical big guns, on such an intimate level and on their own turf. For months leading up to MFD3, something had been bothering me about the discrete topic of “analytics” as collectively presented by the industry- but I hadn’t been able to nail down my unease until this past week.

And with the help of an email I received on the trip back east after Mobility Field Day was over.

Email Subject Line: fixing the wifi sucks problem

That was the subject line of the email, sent by an employee of one of the companies that presented on their analytics solution at MFD3 (Nyansa, Cisco, Aruba Networks, Fortinet, and Mist Systems all presented on their own analytics platforms). The sender of this email knew enough about me to do a little ego stroking, but not enough to know that only a matter of hours earlier I was interacting with his company’s top folks, or that I’ve already done an extensive eval of the product he’s pitching at my own site. No matter… a polite “no thanks” and I was on my way. But his email did ring a bell in my brain, and for that I owe this person a thank you.

The subject line in that email set several dominoes of realization falling for me. For example- at least some in the WLAN industry are working hard to plant seeds in our minds that “your WLAN sucks. You NEED us.” Once that hook is set, their work in pushing the fruits of their labor gets easier. The problem is, our networks don’t all suck. Why? These are just some of the reasons:

  • Many of our wireless networks are well-designed by trained professionals
  • Those trained professionals often have a lot of experience, and wide-ranging portfolios of successful examples of their work
  • Many of our WLAN environments are well-instrumented with vendor-provided NMS systems, monitoring systems like SolarWinds and AKIPS, and log everything under the sun to syslog powerhouses like Splunk
  • We often have strong operational policies that help keep wireless operations humming right
  • We use a wealth of metrics to monitor client satisfaction (and dissatisfaction)

To put it another way: we’re not all just bumbling along like chuckleheads waiting for some Analytics Wizard in a Can to come along and scrape the dumbness off of our asses.

In all fairness, that’s not a global message that ALL vendors are conveying. But it does make you do a double-take when you consider that a whole bunch of data science has gone into popping up a window that identifies a client that likely needs a driver update, when those of us who have been around awhile know how to identify a client that needs a driver update by alternate means. Sure, “analytics” does a lot more, but it all comes as a trade-off (I’ll get into that in a minute) and can still leave you short on your biggest issues.

Like in my world, where the SINGLE BIGGEST problem since 2006, hands-down and frequently catastrophic, has been the buggy nature of my WLAN vendor’s code. Yet this vendor’s new analytics do nothing to identify when one of its own bugs has come to call. That intelligence would be a lot more useful than some of the other stuff “analytics” wants to show.

Trade-Offs Aplenty

I’m probably too deep into this article to say “I’m really not trying to be negative…” but I’ll hazard that offering anyways. Sitting in the conference rooms of Silicon Valley and hearing from many of the industry’s finest analytics product management teams is impressive, and it’s obvious that each believes passionately in their solutions. I’m not panning concepts like AI, machine learning, data mining, etc. as being un-useful; I’d be an idiot to do so. But there is a lot of nuance to the whole paradigm to consider:

  • Money spent on analytics solutions is money diverted from elsewhere in the budget
  • Another information-rich dashboard to pore through takes time away from other taskings
  • Much of the information presented won’t be actionable, and you likely could have found it in tools you already have (depending on what tools you have)
  • Unlike RADIUS/NAC, DHCP/DNS, and other critical services, you don’t NEED Analytics. If you are so bad off that you do, you may want to audit who is doing your network and how

Despite being a bit on the pissy side here, I actually believe that any of the analytics systems I saw this week could bring value to environments where they are used, in an “accessory” role. My main concerns:

  • Price and recurrent revenue models for something that is essentially an accessory
  • How well these platforms scale in large, complicated environments
  • False alarms, excessive notifications for non-actionable events and factors
  • Being marketed at helpdesk environments where Tier 1 support staff have zero clue how to digest the alerts and everything becomes yet another frivolous trouble ticket
  • That a vendor may re-tool their overall WLAN product line and architecture so that analytics is no longer an accessory but a mandatory part of operations- at a fat price
  • Dollars spent on big analytics solutions might be better allocated to network design skills, beefy syslog environments, or to writing RFPs to replace your current WLAN pain points once and for all
  • Whether 3rd-party analytics have a place in an industry where each WLAN vendor is developing their own

If all of that could be reconciled to my liking, much of my skepticism would boil off. I will say that after this last week at MFD3, both Aruba and Fortinet did a good job of conveying that analytics plays a support role, and that it’s not the spotlight technology in a network environment.

Have a look for yourself at Arista, Aruba, Cisco, Fortinet, Mist and Nyansa telling their analytics stories, linked to from the MFD3 website.

Thanks for reading.