Tell me if this sounds familiar- you spend top dollar on brand-name networking gear, only to put it into service and have some major feature bork out and cause your organization significant embarrassment. You’ve researched the product, have been cajoled into buying from a vendor that swears you’re getting a great piece of gear, and yet something catastrophic makes your deployment go sideways. You engage tech support, verify that your topology and configurations are OK, yet the suck storm still pummels the networked landscape. You’ve found yourself in The Bug Zone.
Ever been here? It gives a bloke or blokette a powerful lonely feelin’. With users in pain, managers who may or may not be sympathetic, and the little voice in the back of your head asking “what could I have done differently?” that ultimately answers itself with “maybe I shoulda cut this vendor off after the last dozen major code issues. But like a victim of domestic abuse, I keep going back for more, hoping it’ll get better.”
Does this ring familiar with anyone?
I’ve heard from a lot of individuals in the greater IT community of late about all of the many bugs they have hit, and 75% of the time the lament is accompanied by something like “the rush to cram ever more features under the hood is making the whole damn thing a time bomb of suck, and it feels like QA is being shortcut in the name of getting product to market faster”.
What if, in our support contracts, we added a section that gave us a weapon against major code bugs? Perhaps we need to become our own CSRs (Code Suck Regulators) and write into our agreements that any verified major code bug causing network downtime or significant user impact, as when a half-baked feature sends the network into a tailspin, results in a fine of $1,000 a day until the vendor resolves the bug. Would code development maybe slow down a bit, and QA labs be better funded, staffed, and used? Would major bugs drag out for weeks and months if the meter was running at each affected customer site? I’d also suggest making vendors keep all of their verified major bugs in plain view of the world on a vendor-neutral website that requires no login to see bug details and impact, with posting a mandatory requirement enforced by somebody or other- or again, fines are levied.
OK- I get that the networking industry and all of its various niches don’t, and won’t, ever work this way. At the same time, it’s mildly fun to think about not being victimized anymore by companies that don’t seem to really care about their code quality once you’ve used their stuff long enough to see definite trends in significant bugs. And I am talking about SIGNIFICANT bugs- the ones that are devastating to network performance and to your organizational and personal reputations, not just horrible misspellings or cryptic broken-English error messages on a webpage. Maybe fines aren’t the answer, but if you’ve got a better idea on how to change the trend of Free-Flowing Suck when it comes to code, I’d love to hear it.
(This is where some of you are thinking- bah, just do better testing before you deploy the code that you say sucks. My reaction: yeah, good luck with that. There’s only so much you can test, and only so far you should have to go to be the vendor’s QA department.)