Wouldn’t the world be perfect if…
- QoS schemes were a global standard? You can point to recommended best practices, common schemes, and the like, and suggest that there are predictable ways to do QoS. That’s all well and good, but for the most part QoS still relies on markings that are not reliably applied, queuing mechanisms that vary by vendor, carrier translation maps that vary by carrier, and sundry hocus-pocus whose incantations vary by platform. When a link gets congested, it’s still a pain, after all these years, to make sure the important stuff gets delivered on time while the rest gets dropped. We should be past this by now in networking; we’re not even close. With the advent of 10GbE carrying converged storage and data, the issue has only gotten more pressing, with more standards getting thrown at the problem. Why do I care? Because this is one of the central issues that causes IT managers to avoid the converged network. They’ll spend more for a dedicated storage network because they don’t trust convergence. Why? Because doing a converged network right is still hard.
- IT gear had a standardized field for support information? Practically all devices expose important information about themselves via SNMP. The “location” and “contact” fields are ubiquitous. Add a “support” OID for me. Bonus points for the vendor who auto-populates the field with serial number, contract information, expiration dates, and a phone number to reach a human being.
- Oversubscription was a thing of the past? I am weary of having to architect around oversubscription challenges inside the data center.
- Fiber optic cable could be easily terminated by laymen with cheap tools? We’re getting closer, but we’re still nowhere near the simplicity of UTP termination.
- IOS was standard across the board? Cisco has been promising a universal IOS for a long time, but what you get still varies greatly by IOS version, licensing, and hardware platform. Up to a point, writing a configuration template shouldn’t have to take the hardware platform into consideration. If a platform can perform, say, a specific QoS function, it should perform it predictably; I want the underlying hardware differences abstracted from me as much as possible. Cisco is making headway, but the fact remains that their product teams are siloed.
- Licensing costs didn’t make you want to bash your own head in just to get basic features in IOS? I have to spend extra to get a switch that can run full EIGRP versus stub-only? That’s honestly offensive. And let’s not even start on ASA licensing, where it takes a degree in rocketry just to figure out the right SKUs.
- The world was forced onto IPv6 instead of silly stopgaps like carrier-grade NAT rearing their ugly heads? While Y2K turned out to be the crisis-that-wasn’t, IPv4 depletion actually matters. And yet IPv6 is not being rolled out globally. Not many people even seem interested, and not all carriers are offering the service yet. Part of the issue seems to be that the hexadecimal addressing scheme is inscrutable at first, even for IT people who are used to the arcane, so folks are putting IPv6 off as long as they can. I can taste a world without NAT. Please don’t make me live in the land of NAT (except for specific circumstances) for another two decades. Let’s move forward.
- Tedious, repetitive network provisioning tasks could be relegated to self-service? We’ve talked about what companies like Tail-F are doing, along with industry standards like NETCONF and YANG, so I have hope that self-service provisioning could happen. I’ve built enough VPN tunnels. I’ve provisioned enough switchports. I’m looking for functions like these to be rolled up into larger provisioning tasks so that I never even need to see what happened. I’d be happy with a daily or weekly report that tells me what got lit up where.
- The latency problem could be licked once and for all? Long fat pipes are great and all, but filling them is a challenge. Can’t we start to bake intelligence into acknowledgment-based protocols so that they take latency into account? Having to pump everything through WAN optimizers introduces complexity and points of failure. I know, I know – there’s caching, tokenization, and the rest of the WAN op magic; clearly, WAN optimization is not just about TCP optimization. And I know you can’t change physics. But we need to get to a point where a link can be fully utilized regardless of geographical distance and without special boxes. A tough nut to crack, to be sure.
- Access to wide-area fiber was everywhere? Ultra high-speed broadband service is a tricky thing in the US. Right-of-way issues, legacy ownership, and a lack of centralized utilities limit the deployment of a high-bandwidth medium like fiber optic cabling to specific pockets of the nation. A lucky few have it. Most have not, and I fear never will. This affects businesses and residences alike. For example, here in the somewhat rural area of New Hampshire in which I live, I have access to one serious broadband provider – my local cable operator. They have a line of coax into my house. The only other player is the local telephone company, which can only offer me the poorest of DSL service because of my distance from the central office. I’d like to see a paradigm shift in this area, and that can only come with regulatory change. Why should a change happen? Because with truly high-speed service available everywhere, the sorts of information services people might consider deploying are no longer limited to the “haves.” The playing field gets leveled. Your business could become less about where you are, and more about what you do.
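To make the “support” field wish concrete: SNMPv2-MIB already standardizes sysContact and sysLocation in the system group, so a sibling object is not a stretch. Here’s a minimal Python sketch – the sysSupport OID, the enterprise number, and the value format are all hypothetical, invented only to illustrate what a vendor could auto-populate:

```python
# Real objects from the standard SNMPv2-MIB system group:
STANDARD_OIDS = {
    "sysContact": "1.3.6.1.2.1.1.4.0",   # real: SNMPv2-MIB
    "sysLocation": "1.3.6.1.2.1.1.6.0",  # real: SNMPv2-MIB
    # Hypothetical: a vendor enterprise OID for the wished-for support field.
    "sysSupport": "1.3.6.1.4.1.99999.1.1.0",
}

def support_string(serial, contract, expires, phone):
    """Hypothetical auto-populated value a vendor could return
    for a 'support' OID: serial, contract, expiration, and a
    number that reaches a human being."""
    return f"SN:{serial} CONTRACT:{contract} EXPIRES:{expires} TAC:{phone}"
```

A real agent would serve this through its SNMP stack, of course; the point is only that the field and its format could be standardized the same way location and contact already are.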
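The “long fat pipe” frustration above comes down to the bandwidth-delay product: a sender must keep that many bytes in flight to fill the link, and a fixed window caps throughput at window/RTT no matter how fat the pipe is. A quick back-of-the-envelope sketch (the 1 Gbps / 80 ms figures are just illustrative):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_seconds / 8

def max_throughput_bps(window_bytes, rtt_seconds):
    """Throughput ceiling imposed by a fixed window over a given RTT."""
    return window_bytes * 8 / rtt_seconds

# Example: a 1 Gbps transcontinental link with 80 ms RTT
# needs ~10 MB in flight to stay full...
needed = bdp_bytes(1e9, 0.080)          # 10,000,000 bytes

# ...while a classic unscaled 64 KB TCP window tops out around 6.5 Mbps.
ceiling = max_throughput_bps(65535, 0.080)
```

This is why the window-scaling option and the WAN-op boxes exist; the wish is for the protocols themselves to handle it without the special hardware.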
Do I think all of these things could happen? Do I think all of these things are even practical or possible? No, not really. Utopia is not really an expectation…it’s more of a daydream. Sometimes it’s fun to think about the “what if” scenarios.