Category Archives: Cisco Networks

It's Time to Let the WAP Rage Go and Move On

It’s an often-repeated cycle: someone says WAP in reference to a wireless access point, while those of us who consider the device to be an AP (no W) recoil viscerally. Maybe a lecture ensues about the PROPER way to refer to a WLAN access point, and it’s not uncommon to get a lot of YEAH WHAT HE SAID! and maybe some DAMN RIGHT! thrown in as we all work ourselves up to a froth over this oddball, seemingly important topic.

I said “seemingly” important.

Except it’s not. It’s actually kind of snobby, and kind of foolish. Don’t we have bigger things to worry about?

Those we quibble with in the epic WAP vs AP Thousand Years’ War often come from different backgrounds, where they learned that WAP is correct, prudent, and A-OK. One example: I work with a really smart BICSI-certified RCDD (that’s Registered Communications Distribution Designer, kind of like the CWNE of the wiring world) and guess what? He learned that WAP is a standard term on his professional journey. BICSI’s ICT Terminology Handbook uses WAP no less than 14 times!

Then there’s the WLAN market leader- Cisco. “WAP” occurs in enough Cisco documentation to be considered a valid term, at least by me. An example:

Are you AP-purists feeling silly yet? I fully realize that this is one of those religious debates that polarizes people. Some will NEVER stop clinging to “AP is good and WAP is evil” because the notion has become ingrained in the fabric of their WLAN professional beings.


Wikipedia says both are OK. If you Google <Wireless Access Point WAP> you’ll find well over a million results, many from vendors who call their stuff WAPs.

This shouldn’t be one of those triggers that make us drop what we’re doing to school our fellow men and women about what they SHOULD call a wireless access point, yet it is. It doesn’t make us look smart or superior. Au contraire, it makes us look kinda petty, closed-minded, and dare I say silly.

Just stop it already.

Chasing Down Errant Cisco APs

Some product sets definitely require more care and feeding than others… that’s all I’ll say in that regard lest I let go with the rant that is on the tip of my tongue. What I’m about to present is in regard to Cisco 3702 access points on the code version I run, although I have no doubt the condition applies to many models and code versions.

Problem statement: The freakin’ APs cut and run. They go over the wall, but they are real sneaky about it. They do it in a way that ain’t so easy to detect… Or in Cisco’s own words: “As per FN70330 – IOS AP stranded due to flash corruption issue, due to a number of software bugs, the flash file system on some IOS APs may become corrupt over time. This is seen especially after an upgrade is performed to the WLC, but is not necessarily limited to this scenario. The AP may be working fine, servicing clients, etc., while in this problem state, which is not easily detectable.”

See this Cisco doc as the source of the above statement, and please know that I’m not saying that MY issue is absolutely THIS issue. Although it could be. There are many fine bugs to choose from.

What it Looks Like, and What it Doesn’t Look Like.

Cisco rightly says that the “problem state is not easily detectable”, and I agree. We’ll focus on a single 3702 AP for this blog, but I know from first-hand conversation that some folks have been bitten by dozens or hundreds of similar free-spirited APs all going for an intent-based spontaneous joyride in the name of innovation.

Prime Infrastructure doesn’t show my AP as being “out”, and I have yet to find any reliable way to show this condition via any other reports in PI. If you ping it, it responds. Look at it in CDP, it’s there. But… all is not well, sir. Not at all, sir. Despite all the reassuring indicators, this AP that has been up and fine and doing its job suddenly got cabin fever:


So… the normal ways of finding out that APs are essentially out of service (like using your expensive NMS) don’t apply in this scenario, and you basically have to stumble upon it, or be alerted when users can’t connect to the AP- which unfortunately is a common canary in the coalmine when dealing with bugs in this particular framework.

Say there- did I mention that the AP never recovers in this situation? It stays in perpetual “Downloading” until you figure out a way to recover it. Value. Buy more licenses… because the one this AP is using is worthless while it’s in this innovative state of self-determinism.
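For what it’s worth, there are a couple of CLI angles worth trying from the controller before you go stumbling around. This is a sketch against an AireOS WLC; exact command output varies by code version, and the placeholder is just that:

```text
(Cisco Controller) > show ap summary          -- is the AP even showing as joined?
(Cisco Controller) > show ap image all        -- per-AP image and download state
(Cisco Controller) > ping <ap-ip-address>     -- remember: it answers even when stuck
```

None of these are a silver bullet (that’s the whole point of the bug), but the image state view is one of the few places the perpetual “Downloading” shows its face from the CLI.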

No Resetting Through the Controller UI

It stands to reason that maybe rebooting the AP will get it back to where it needs to be. That’s a pretty common troubleshooting step. But you can’t do it from the controller interface while the AP is trying to go to a happy place that it will never reach.


Allow me to digress… I like to think that when the AP gets to this point, it probably hears Soul Asylum singing Runaway Train in its mind…

It seems no one can help me now
I’m in too deep
There’s no way out
This time I have really led myself astray

Runaway train never going back
Wrong way on a one-way track
Seems like I should be getting somewhere
Somehow I’m neither here nor there

Ahem. Back to topic. (But what a great song.)

Off to the Switch We Go

Being that we can’t reboot THE VALUE from the controller interface while the AP is riding the runaway train, we need to visit the switch for command line operations. Basically, we pull the PoE plug via command entry, then restore it (informational note: no innovation licenses are required to enter commands- yet).
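In case it helps, the sequence looks something like this on a Catalyst switch (the interface number here is hypothetical; substitute the port your stranded AP hangs off of):

```text
switch# configure terminal
switch(config)# interface GigabitEthernet1/0/32
switch(config-if)# power inline never    ! cut PoE; the AP goes dark
switch(config-if)# power inline auto     ! restore PoE; the AP power-cycles
switch(config-if)# end
switch# show power inline Gi1/0/32       ! verify the AP is drawing power again
```

Give it a few seconds between the never and the auto so the PoE controller actually drops the port.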


If all goes well, a couple of minutes later you’ll have an AP that has atoned for its separatist thoughts of independence and freedom, and you can welcome it back to the fleet.


Simple Fix (Maybe)

I’m guessing that you’d agree after reading this that the fix for my situation was fairly easy. I’ve seen maybe 20 of these goofball 3702 instances in the last year, now more reliably found after my office mate found a way to poll them with some degree of success via SNMP using AKIPS.


So… finding them may be harder than fixing them, depending on how you are equipped and IF you are dealing exactly with whatever nuanced issue I happen to have in play. But let me again bring you back to this Cisco doc on the topic of corrupt AP flash. Your situation may end up being a lot messier than mine, given the hoops mentioned in the document.

The Network is Code: Cisco at MFD4

It’s always a bit of a thrill to visit Cisco HQ, and to step within the walls of this global network powerhouse. I got to do that again at Mobility Field Day 4, and as usual the presentations and the visit just went too fast. Such is the way these events go… On this go round, Cisco offered us:

Each is interesting and informative, especially when combined with the delegates’ questions. You’ll be glad you watched them, if you haven’t yet.

But something else jumped out at me at this event, and it may seem silly to even mention. Have a look at this sticker:
Code Pic

The wording of it got my mind working. In a number of directions.

I’m just sharing what’s in my head as a long-time Cisco wireless customer as I ponder the message on that innocuous sticker.

I’m glad to see that CODE is the network, because it hasn’t always been. CODE, as presented like this, implies “reliable code, as surely you don’t want an unreliable network”. To that I would add “especially at the costs charged for licensing the hell out of everything”. The sticker mentions CODE + the Catalyst 9000 series, and perhaps sends the message that it’s a new day for reliability? On that topic, the CODE in this case is IOS-XE, which displaces AireOS as what powers the Cisco line of wireless controllers. I do hear often that “IOS-XE has been out a long time so it has to be solid by now” kinda talk.

I’m not sure I buy into that, but am hopeful. If I’m a little skeptical, it’s because IOS-XE packaged as a wireless controller brain is a new paradigm, despite the maturity of the OS. And… despite many, many mea culpa  sessions in private with Cisco’s wireless business unit through the years over wireless code quality, I have yet to see any sort of public-facing commitment to not repeat the development sins of the past as the new magic seeks to gain traction. This bothers me, in that I don’t know that the background culture that allowed so many problems with the old stuff isn’t being carried over into the new. My problem, I know. But I’m guessing I’m not alone with this feeling.

The other thing this sticker has me thinking about is this: if the network is code, why do I need controller hardware? Yes, I know that the 9800 WLC can run in a VM- but VM instances ultimately run on hardware. As a big Cisco customer with thousands of 802.11ac access points that run the latest AP operating system, I would love to be totally out of the controller business (and all the various management servers needed) WHILE KEEPING MY INSTALLED ACCESS POINTS. If the network is code, maybe let me point these things at my Meraki cloud and simplify life?

I’m just one man, with opinions. But that sticker did get me thinking…


Cisco ALMOST Gave Us a Handy WLC Feature

Alas, my strained relationship with Cisco wireless controllers rolls on. My 8540s are on super-wonderful-we-REALLY-tested-it-this-time code, yet *gasp* I’m looking at yet another bug-driven upgrade. Or I can just disable MU-MIMO as listed in the workaround and yet again not use what I paid for! But you don’t care about that rot, as it has nothing to do with the point of this blog. That was just pre-content bitching, as an added bonus.

Let’s get on to the meat and taters of it all.

Take a look at this:


When I’m in the WLC interface, there are various ways to sort for specific APs or groups of APs. The ability to search on Speed is fairly new; if it’s not obvious, it refers to the wired connectivity of the AP and is relevant where mGig switch ports are in use. That’s fairly innocuous, yes?


Suppose that Speed search gave another option- for 100 Mbps? Some of you know where this is going…

In a perfect world, all 3000+ access points on this 8540 would connect to their switch ports at Gig or mGig, depending on models of hardware in play. But the world isn’t perfect. Occasionally, some of those thousands of APs connected to hundreds of switches for whatever reason only connect at 100 Mbps. More often than not, that’s indicative of a cabling issue. Once in a great while it’s a switch misconfig or a bad AP in play.

As is, there is no easy way to find the APs that have joined at 100 Mbps in the controller. An AP that connects at 100 Mbps doesn’t trigger a fault, and you can’t sort on the speed column- in my case, that means wading through almost 50 pages looking for the elusive 100 in a sea of 1000s.
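One workaround sketch in the meantime: if your controller or NMS can export the AP inventory with the wired speed in it, a few lines of Python will do the sort the GUI won’t. The CSV column names here are hypothetical- adjust them to whatever your export actually produces:

```python
# Sketch: filter a WLC AP inventory export for APs linked at 100 Mbps.
# Assumes a CSV with "AP Name" and "Speed" columns (hypothetical export
# format -- rename to match your controller's actual output).
import csv
import io

def find_slow_aps(csv_text, slow="100"):
    """Return names of APs whose wired link speed (Mbps) matches `slow`."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["AP Name"] for row in reader if row["Speed"].strip() == slow]

# Made-up sample data standing in for a real export:
sample = """AP Name,Speed
AP-Bldg1-101,1000
AP-Bldg1-102,100
AP-Bldg2-201,2500
"""
print(find_slow_aps(sample))  # ['AP-Bldg1-102']
```

Not elegant, but a lot faster than eyeballing 50 pages of AP rows.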

Boy it would have been handy if the developers gave us a 100 Mbps option in that Speed search.


A Damn Handy Catalyst Switch Command

When it comes to working with Cisco’s Catalyst switches, there are a handful of commands that get used pretty frequently to tell what’s going on.  I’m talking about after configuration is done, and when you come back to a switch later on for whatever reason to troubleshoot or verify operational parameters. I won’t be telling you anything here that isn’t already in a slew of Cisco docs, but I am working up to a specific point.

These are very common in my world:

  • Show interface (status, counters, errors, etc)
  • Show power inline (PoE info)
  • Show CDP neigh/show LLDP neigh (connected network devices)
  • Show mac address-table (L2 addresses of connected devices)
  • Show log
  • Show VLAN (VLAN database for the switch)
  • Show run (how the switch is configured)

The list goes on, and as most of you reading this know there are also variations of the commands listed that get you more granular information- like detailed information per single interface, expanded CDP details, only the last so many log entries, etc.
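A few of those variations, typed from memory against Catalyst IOS (check your platform- syntax shifts a little between versions, and the interface number is just an example):

```text
show interfaces GigabitEthernet1/0/32 counters errors    ! error counters for one port
show cdp neighbors GigabitEthernet1/0/32 detail          ! expanded CDP info for one neighbor
show mac address-table interface GigabitEthernet1/0/32   ! who is on this port
show logging | include LINK-3                            ! just the link up/down events
```

The pipe-to-include trick alone saves a lot of scrolling on a busy switch.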

Big deal, right? This is pretty basic stuff, I realize. But at the same time, I do feel compelled to give a call-out to one command that I’ve come to truly appreciate:

show interface switchport

This gem tells you a lot about an individual interface and is handy as heck when odd things might be afoot with VLANs. (It recently helped me get to the bottom of a VLAN issue involving the murky mystical VLAN 1 on a Catalyst 3650).

Here’s one instance from a production switch:

#sh interfaces gig 1/0/32 switchport
Name: Gi1/0/32
Switchport: Enabled
Administrative Mode: trunk
Operational Mode: down
Administrative Trunking Encapsulation: dot1q
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Administrative Native VLAN tagging: enabled
Voice VLAN: none
Administrative private-vlan host-association: none
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk Native VLAN tagging: enabled
Administrative private-vlan trunk encapsulation: dot1q
Administrative private-vlan trunk normal VLANs: none
Administrative private-vlan trunk associations: none
Administrative private-vlan trunk mappings: none
Operational private-vlan: none
Trunking VLANs Enabled: 8,170
Pruning VLANs Enabled: 2-1001
Capture Mode Disabled
Capture VLANs Allowed: ALL

Protected: false
Unknown unicast blocked: disabled
Unknown multicast blocked: disabled
Appliance trust: none

Now contrast that with the simpler [sh run interface] command for the same port:

interface GigabitEthernet1/0/32
description pci test or ACS
switchport trunk allowed vlan 8,170
switchport mode trunk
storm-control broadcast level pps 2k 1.5k
storm-control action shutdown
storm-control action trap
service-policy output TACTEST

So, the [show run] command just scrapes the surface of the actual bigger VLAN paradigm in play for the interface, while [show interface switchport] brings all of the VLAN-specific information out into the open, possibly revealing parameters not obvious through the other commands.

It’s the little things, sometimes… I like this command a lot where multiple VLANs are in use.
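If you find yourself comparing that output across a lot of ports, the colon-delimited format is trivial to parse. A quick Python sketch (the sample text is trimmed from the production output above; field labels follow the Catalyst format):

```python
# Sketch: pull the fields out of "show interfaces switchport" output
# (pasted from the CLI) into a dict for easy comparison across ports.
def parse_switchport(output):
    fields = {}
    for line in output.splitlines():
        if ": " in line:  # skip lines like "Capture Mode Disabled"
            key, _, value = line.partition(": ")
            fields[key.strip()] = value.strip()
    return fields

# Trimmed sample from the switch output shown earlier:
sample = """Name: Gi1/0/32
Switchport: Enabled
Administrative Mode: trunk
Operational Mode: down
Trunking Native Mode VLAN: 1 (default)
Trunking VLANs Enabled: 8,170
"""
info = parse_switchport(sample)
print(info["Trunking VLANs Enabled"])  # 8,170
```

Feed it the output for each interface and you can diff trunk VLANs, native VLANs, and operational modes in seconds instead of eyeballing page after page.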

The Other Intent-Based Networking

Anyone who is in networking and who knows me is likely aware that I find a fair amount of fault with “Intent-Based Networking”. It has rubbed me wrong since I first heard it as the latest Cisco campaign, having been through many other flavors-of-the-month through the years. I’ve struggled to find within myself exactly what about Intent Based Networking has been pissing me off, but admit that this bogeyman in my mind has been elusive… very hard to pin down. Yet something has been stuck in my craw, I tellya.

Is it the sea of buzzwords that came with it? Is it the coincidental timing of this blog that asks us to swallow that subscriptions somehow equal innovation? (Sorry Cisco- that is a ridiculous stretch, even for you). Or this article in the same time frame telling the world all the ways Cisco is turning up the marketing heat? Sure, put it all together and to me- a customer frustrated by code bugs, feature bloat, corporate bloat, mixed messages at various Cisco levels, and the way that staying a large Cisco customer smells more expensive now than it ever has- and all of that adds to the feeling of being smothered a bit. But even all of THIS isn’t the root of my revulsion at Intent-Based Networking.

But I figured out what is bugging me about Intent-Based Networking. (It came to me like a bolt out of the blue when I was playing Sock Guy with my pug dog.)

Before I get there, let’s take a detour to this Network World Article. I have only recently learned that Intent-Based Networking is not just an obnoxious marketing slogan from Cisco; it’s also recognized as a bigger industry concept that I had simply never heard of by that name. From the article by Brandon Butler:

Gartner Research Vice President Andrew Lerner says intent-based networking systems (IBNS) are not new, and in fact the ideas behind IBNS have been around for years. What’s new is that machine learning algorithms have advanced to a point where IBNS could become a reality soon. Fundamentally, an IBNS is the idea of a network administrator defining a desired state of the network, and having automated network orchestration software implement those policies.

“IBNS is a stark departure from the way enterprise networks are managed today,” Lerner explains in a research note describing IBNS. “Currently, translation is manual, and algorithmic validation is absent… Intent-based networking systems monitor, identify and react in real time to changing network conditions.”

It goes on to say that IBNS, as a generic construct, has four basic aspects: Translation and validation, Automated implementation, Awareness of state, and Assurance and dynamic optimization/remediation.  Those don’t belong to Cisco, they are the make-up of the general concept of Intent Based Networking. It’s a good article and worth reading.

So back to my angst and irritation. I’ve identified two co-equal notions that steam my clams when I hear Intent-Based Networking, as laid on thick by Cisco.

#1 Irritant. I, and others, have written about being a bit insulted by “AI” as a fix to everything in networking. No one with common sense and a pulse denies that machine learning and artificial intelligence are powerful concepts that can be transformative if implemented right. But… Cisco, Mist, and others tend to send the vibe “our shit is great because of AI and machine learning- we have the right buzzwords and those buzzwords alone would have your wallet salivating! Without this new magic, you suck and your networks suck and you are lost at sea and you have soooooo many problems!”

The problems with that? Some of us design and run really good networks and aren’t thirsting for some mystical deity to come scrape the dumb off of our asses. And… many of the companies and individuals behind the new network magic don’t have stellar track records of getting code and actual customer needs and wants right. To be forced into Intent-Based Networking as the only real evolutionary option does create some discomfort. The new stuff is priced way too high for what is and will remain essentially beta quality in many cases.

#2 Irritant. I’ve heard nothing in Cisco’s marketing about the other Intent-Based Networking. This is the one where CUSTOMER INTENT is for the network to actually and predictably work, with minimal code bugs, free of a gimmicky feel, and with a price structure that doesn’t write out the words “Fleece the Customer” in the sky with a smoke-writing bi-plane. What about OUR intent? Stability, predictability, and no bullshitty licensing paradigms that make sure we never really own what we buy- pretty sure that summarizes the intent of most customers… Like having a network that isn’t the cause of most of its own problems by the vendor not shipping problematic code? That’s intuitive, no?

Sometimes words are just words, but put “Intent Based” next to “Networking” and Maslow comes to mind- the foundationally important stuff is what the customer thinks about first.

THIS “Intent-Based Networking” is more important than the other one from where I sit. The two notions don’t have to be mutually exclusive, but it feels like they are right now. From the customer perspective, we don’t just pivot from years of erratic code and odd TAC engagements to a brave new expensive and Intent-based world without great skepticism just because Cisco’s new marketing army says it’s the thing to do. Tone it down and talk WITH us, not AT us.

There- now we’ve solved it. I actually feel better getting it out.

(And don’t even get me going on the Network. Intuitive.)


Figuring Out What Bothers Me About Wi-Fi and “Analytics”

I’ve been to the well, my friends. And I have drunk the water.

I was most fortunate in being a participant in the by-invitation Mobility Field Day 3 event this past week. Few events get you this close to so many primary WLAN industry companies and their technical big-guns, on such an intimate level and on their own turf. For months leading up to MFD3, something has been bothering me about the discrete topic of “analytics” as collectively presented by the industry- but I haven’t been able to nail down my unease until this past week.

And with the help of an email I received on the trip back east after Mobility Field Day was over.

Email Subject Line: fixing the wifi sucks problem

That was the subject in the email, sent by an employee of one of the companies that presented on their analytics solution at MFD3 (Nyansa, Cisco, Aruba Networks, Fortinet, and Mist Systems all presented on their own analytics platforms). The sender of this email knew enough about me to do a little ego stroking, but not enough to know that only a matter of hours earlier I was interacting with his company’s top folks, or that I’ve already had an extensive eval with the product he’s pitching at my own site. No matter… a polite “no thanks” and I was on my way. But his email did ring a bell in my brain, and for that I owe this person a thank you.

The subject line in that email set several dominoes of realization falling for me. For example-  at least some in the WLAN industry are working hard to plant seeds in our minds that “your WLAN sucks. You NEED us.” Once that hook is set, their work in pushing the fruits of their labor gets easier. The problem is, all of our networks don’t suck. Why? These are just some of the reasons:

  • Many of our wireless networks are well-designed by trained professionals
  • Those trained professionals often have a lot of experience, and wide-ranging portfolios of successful examples of their work
  • Many of our WLAN environments are well-instrumented with vendor-provided NMS systems and monitoring systems like SolarWinds and AKIPS, and log everything under the sun to syslog powerhouses like Splunk
  • We often have strong operational policies that help keep wireless operations humming right
  • We use a wealth of metrics to monitor client satisfaction (and dis-satisfaction)

To put it another way: we’re not all just bumbling along like chuckleheads waiting for some Analytics Wizard in a Can to come along and scrape the dumbness off of our asses.

In all fairness, that’s not a global message that ALL vendors are conveying.  But it does make you do a double-take when you consider that a whole bunch of data science has gone into popping up a window that identifies a client that likely needs a driver update, when those of us who have been around awhile know how to identify a client that needs a driver update by alternate means.  Sure, “analytics” does a lot more, but it all comes as a trade-off (I’ll get into that in a minute) and can still leave you short on your biggest issues.

Like in my world, where the SINGLE BIGGEST problem since 2006, hands-down and frequently catastrophic, has been the buggy nature of my WLAN vendor’s code. Yet this vendor’s new analytics do nothing to identify when one of its own bugs has come to call. That intelligence would be a lot more useful than some of the other stuff “analytics” wants to show.

Trade-Offs Aplenty

I’m probably too deep into this article to say “I’m really not trying to be negative…” but I’ll hazard that offering anyways. Sitting in the conference rooms of Silicon Valley and hearing from many of the industry’s finest analytics product management teams is impressive, and it’s obvious that each believes passionately in their solutions. I’m not panning concepts like AI, machine learning, data mining, etc. as being un-useful, as I’d be an idiot to do so. But there is a lot of nuance to the whole paradigm to consider:

  • Money spent on analytics solutions is money diverted from elsewhere in the budget
  • Another information-rich dashboard to pore over takes time away from other taskings
  • Much of the information presented won’t be actionable, and you likely could have found it in tools you already have (depending on what tools you have)
  • Unlike RADIUS/NAC, DHCP/DNS, and other critical services, you don’t NEED Analytics. If you are so bad off that you do, you may want to audit who is doing your network and how

Despite being a bit on the pissy side here, I actually believe that any of the Analytics systems I saw this week could bring value to environments where they are used, in an “accessory” role.  My main concerns:

  • Price and recurrent revenue models for something that is essentially an accessory
  • How well these platforms scale in large, complicated environments
  • False alarms, excessive notifications for non-actionable events and factors
  • Being marketed at helpdesk environments where Tier 1 support staff have zero clue how to digest the alerts and everything becomes yet another frivolous trouble ticket
  • That a vendor may re-tool their overall WLAN product line and architecture so that Analytics is no longer an accessory but a mandatory part of operations- at a fat price
  • Dollars spent on big analytics solutions might be better allocated to network design skills,  beefy syslog environments, or to writing RFPs to replace your current WLAN pain points once and for all
  • If 3rd party analytics have a place in an industry where each WLAN vendor is developing their own

If all of that could be reconciled to my liking, much of my skepticism would boil off. I will say after this last week at MFD3, both Aruba and Fortinet did a good job of conveying that analytics plays a support role, and that it’s not the spotlight technology in a network environment.

Have a look for yourself at Arista, Aruba, Cisco, Fortinet, Mist and Nyansa telling their analytics stories, linked to from the MFD3 website.

Thanks for reading.

Another Example of How Important Wire is to Wireless

A house built on a shaky foundation cannot endure. And a WLAN built on a shaky wiring foundation likewise cannot endure, I tellya. My friends, is your foundation shaky? Is it? CHECK YOUR FOUNDATION NOW. (I happen to sell foundation-strengthening herbal supplements on the side, if you need that sort of thing…)

I’ve long been a proponent of recognizing installed UTP as a vital component in the networking ecosystem. Too many people take Layer 1 for granted, and forgivable sins of our 10 Mbps and Fast Ethernet pasts won’t fly in a Gig world. Toolmakers like Fluke Networks sell cable certification testers that take the guesswork out of whether a given cable run can be relied on to perform as expected. Don’t use one of these testers at time of cable installation, and you are only assuming you have a good station cable.

I just had an interesting situation come up that I helped a very skilled field tech with. He was working in several different small buildings, each serviced by a Cisco Catalyst Switch and a handful of 3802 802.11ac access points. The switches and cable had been in place for years, and the APs for many months, all with no issues whatsoever.

Then, we changed out the old 3560X switches for shiny new 3650s (curse you Cisco for your bizarre fascination with part numbers so close together), and suddenly some APs weren’t working any more. Between us, we checked all switch settings, POST reports, CDP tables, logs, etc- everything you can dream up on the switch. We put the APs that weren’t working back on the old switches, and they came right up. Hmmm… thoughts turned to PoE/code bugs, but then I went a-Googlin’ before consulting TAC.

I found this document that put me on the path to righteousness. Though we weren’t having “PoE Imax Errors”, a couple of nuggets jumped out at me about our new switches.

PoE Imax

Holy guacamole- We got us a situation! But wait… THERE’S MORE!

PoE Imax2

Shazam! Which, of course, translates in Esperanto to “maybe your cable is actually kind of iffy, and all the CDP stuff that happens at the milliwatt level before PoE gets delivered worked OK with your old switch but not with the new one that has the enhanced PoE controller”.

If you don’t know that the newer switch does PoE differently, you might wrongly assume that your cabling is “good” because the APs worked on it when those APs used the old switches connected to that wiring. By now, you can probably guess where I’m headed…
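When chasing this sort of thing, the per-port PoE detail view is worth a look before you assume the worst- it shows how far the classification handshake got before things fell apart. A sketch (interface number hypothetical; field names vary by platform and code version):

```text
switch# show power inline GigabitEthernet1/0/32 detail   ! classification result, fault state
switch# show logging | include ILPOWER                   ! PoE controller error messages
```

Between the detected class on the port and any ILPOWER complaints in the log, you can usually tell whether the switch ever agreed to deliver full power to the AP.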

Our tech tested the cabling on the new-switch-problem APs and in each case found that they needed help to work with the new switch. He re-terminated and tested each, with the APs then coming up with no issues. I have no doubt that this cable was certified 10-12 years ago, but in that time a lot can happen to either end of those cables depending on the environments where they are used.

Live and learn!



Cisco’s Latest AP is Mind-Blowing (and a quick history lesson)

Feast your eyes on that little Chiclet-looking thing… No image can do justice to Cisco’s latest powerhouse AP. That innocuous-looking image represents a full 5.6 pounds (2.5 kg) of all kinds of Cisco’s latest technology in the company’s new 4800-series access point. You got 4×4 802.11ac Wave 2 radio wizardry, a built-in hyperlocation antenna array, and BLE beacon capability. And… regardless of whether you buy into Cisco’s DNA Center story, the new 4800 has a lot of DNA-oriented functionality. It’s big in size, functionality, and at least for a while- price.

You don’t need me regurgitating the entire data sheet- that can be viewed here. You’ll also want to hear the full story of the 4800 and DNA Center when you get a chance, because it’s nothing less than fascinating. (My own take: DNA-C might be revolutionary- but I’d rather see new controllers with a new WLC operating system rather than bolting DNA-C’s future-looking promise onto yesterday’s fairly buggy wireless parts and pieces. That’s just me speaking from experience- take it or leave it).

I’ve seen the 4800 with the outside cover removed, and even that is profoundly thought-provoking when your eyes take in how much is really going on with the various antennas- get a look at that if you can (I’m not comfortable sharing the images I’ve seen, not sure where NDA starts and stops on that).

So a huge access point story is afoot, and I applaud Cisco on that bad-lookin’ mammajamma. But I also got sparkly-eyed by something else fairly nerdy while looking through 4800 materials and links to other links.

Here’s a screen grab of the 4800 power specs:

4800 power

Nothing real exciting there, right? New APs generally need the latest PoE+, and we’re a few years into that story. But I somehow stumbled across this document, that shows this picture:

and it took me way back to my own early days of wireless. My WLAN career started with a 4-AP deployment of those 350s, which ran VxWorks for an operating system and had only 802.11b radios… (cue the flashback music here).

Also included in that doc is this brief history of PoE:

PoE Hist

As I read that over, my mind goes back to all of the Cisco APs that have come and gone in my own environment- 350, 1130, 1200, 2600, 3500, 3600, 3700, and our latest in production, the 3800. Within this list there have been multiple models in the different AP series, leading to the thousands of APs now deployed in my world.

On the operating system side, VxWorks became IOS, and in turn AireOS. Now we have AP-COS on the latest Wave 2 APs (don’t Google “AP-COS”, most of what comes back is bug-related, sadly).

It’s interesting to reflect back, on operating systems, PoE, radio technologies, and feature sets. As Wi-Fi has gotten more pervasive, it has also gotten more complicated on every level. Seldom is the latest access point THE story any more, now it’s about all of the features that come with the whole ecosystem that the vendor wants that access point to operate in- if we as customers buy into the bigger story.  I’m not passing judgement on anything with that statement, or intentionally waxing nostalgic (well, maybe a little bit).

It’s pretty neat how one image or a certain document can suddenly flash your entire wireless history before your eyes.

Good stuff.