Good Habits For Basic Ethernet Switchport Provisioning In A Cisco IOS Environment

An opening remark: this post has grown to roughly 3,500 words, and even so, it's impossible to cover every switchport command, scenario, and preference that could apply to switchport provisioning. What I initially thought would be a fun little post has turned into a bit of a beast that talks through why I've chosen different commands based on my own experiences. I daresay your experiences might have proven different from mine, and so your standard switchport build could legitimately look very different. I also gave up on complete topic coverage after a bit: UDLD is notably absent, as is LACP, but those are worth revisiting as their own topics in future posts. Also note that I didn't get into the syntactic specifics of NX-OS or CatOS, which can vary from IOS.

Feel free to share your own tricks, preferences, and experiences in the comments.

Probably the most basic task a network staff member will be called upon to perform is that of provisioning a switchport. It’s not glamorous. In large shops, the task is probably automated. But for many of us, we still fire up the CLI to make a switchport do whatever it needs to do. I’ve made a number of observations, command choices, and processes over the years that have improved my switchport provisioning success rate.

Everyone lies except the CLI.

By this, I mean that if a sysadmin has asked you to provision rack 14A, Switch-2, interface Gi3/5, make sure that’s the port they really mean. Don’t take their word for it that the port is available. If the guy who ran the cable made a mistake (even if that guy is you), you might be re-provisioning a production port and causing an outage. How to check? There’s several things you can do, one or more of which will inform you.

  • show interface Gi3/5 status – Is the port connected?
  • show run interface Gi3/5 – Has the port been provisioned previously?
  • show mac address-table interface Gi3/5 – Has a MAC address been learned on the switchport?
  • show ip arp | i mac-you-learned-above – What IP address is attached to that MAC? You'll need to execute this command from the device that's the default gateway for the VLAN in question.
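To make that concrete, here's what the verification pass might look like on a hypothetical port (the MAC, IP, and VLAN values are all made up):

Switch-2# show interface Gi3/5 status

Port      Name    Status       Vlan   Duplex  Speed  Type
Gi3/5             connected    210    a-full  a-1000 10/100/1000BaseTX

Switch-2# show mac address-table interface Gi3/5
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
 210    0050.56ab.1234    DYNAMIC     Gi3/5

! From the device acting as default gateway for VLAN 210:
Core-1# show ip arp | i 0050.56ab.1234
Internet  10.20.30.40    5   0050.56ab.1234  ARPA   Vlan210

Here, the port is up with a learned MAC and a live IP on VLAN 210 – a strong hint that something production is attached.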

If you learn the IP, then you can go back to the sysadmin and tell them that a system with that MAC and IP already lives on that port. So are they SURE they want you to reprovision it?

Assume nothing.

Once you’re confident that the selected port is the correct one to be working on, reset it to defaults.

default interface Gi3/5

I always reset a port to defaults before provisioning it in my environment, because I inherited a switching environment that was haphazard. I manage a few thousand ports across several environments, and the state of a given port is not predictable. The easiest way to be sure I’m not going to end up with some oddball interface configuration parameter is to reset the port to defaults.

Check for the existence of the VLAN you’re putting the port into.

show vlan id XYZ

In a mature and well-managed VTP environment, this is probably a non-issue; the VLAN isn't likely to be missing. However, I inherited an environment where I judged it wise to disable VTP, for a number of reasons. Chief among them: my predecessor knew nothing of best practices, as evidenced by the network he left me, and there's a lot of old equipment lying around in the storage room harboring who knows what sort of VTP database with a malicious revision number. The configuration surprises I've run into as I've reined in the environment over the last couple of years have made me leery of VTP. But VTP isn't really the point here. My point is to make sure that the VLAN exists.
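If the VLAN turns out to be missing, creating it locally is quick (VLAN ID and name here are hypothetical):

vlan 210
 name SERVER-FARM-A

Keep in mind that with VTP disabled or in transparent mode, the VLAN has to be created on every switch in the path, not just the access switch.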

Once you know that the VLAN exists, you should also verify that the spanning-tree root bridge looks correct. This will verify that the switch uplinks (often 802.1q trunks from the access switch to the distribution or core layer, but not always) are carrying the VLAN as well.

show span vlan XYZ root

The output of this command shows the port through which the root bridge is known and the priority of the root bridge. Assuming you're familiar with your spanning-tree topology, this command validates that the server has a path out of the switch. In my case, I manually prune VLANs on uplinks with the "switchport trunk allowed vlan" command, so it's possible that I missed adding a new VLAN to a trunk. In that case, the STP root bridge would show up as the local switch with a default priority value of 32768+. I do not use default root bridge priorities when building spanning trees, so a root value of 32768+ indicates that the path from the access-layer switch to the root bridge is broken. If you've left your STP topology at defaults, this check won't help you much; expected root bridge priority values will vary by environment.
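If that check shows the local switch as root at a default priority, the usual fix in a manually pruned world is to add the new VLAN to the uplink trunk's allowed list (interface and VLAN number hypothetical):

interface Gi1/1
 switchport trunk allowed vlan add 210

Note the "add" keyword. Issuing "switchport trunk allowed vlan 210" without it replaces the entire allowed list with VLAN 210 alone, which is a classic way to knock every other VLAN off a production trunk.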

Time to provision.

Now you're ready to provision the port. There are 15 ways to do this, and none of them is necessarily "right". Think of these as my preferences, backed by my experiences. I'm going to break down the commands I use for a port that faces a server in a data center, which you are free to agree or disagree with as you like. User or phone access ports are going to be different, to be sure, so don't see my commands as a recipe for any and all ports on your network.

Philosophically, I am very controlling when provisioning server-facing switchports. I don’t like dynamic negotiation for much of anything other than speed and duplex. If you love a dynamic switchport, you won’t like some of what I’m doing.

Documentation is your friend.

I always put in a description. Always. I'm so insistent on this point that I have at times refused to enable a switchport for a sysadmin until they've told me the assigned hostname and IP address. They'll rarely "get it to you later". They don't care about your documentation. But you do. Now, if you don't like what I put in the description field, that's fine – put in what works for you. Just be consistent about it. For me, the hostname and IP address are meaningful fields that show up in network management tools, as that field is discoverable via SNMP. The description field is a good first line of defense when troubleshooting a problem. True, you can't count on it always being correct if some sysadmin made a change and didn't tell you, but I estimate that it's 80+% accurate in most environments.

description HOSTNAME | IP ADDRESS | Purpose, other comments.

As a side note, I find huge value in documenting carrier-facing ports with the carrier name, circuit ID, LEC ID (if I know it), and support phone number. That way, I have everything I need to get a case opened right there in an alert, assuming I’ve included the interface description in the alert. That’s mighty handy if you’re offline.
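As an illustration, a carrier-facing description in that style might read like this (every detail here is fabricated):

description ACME-TELCO | CKT 53.ABCD.123456 | LEC CENTURYLINK | NOC 800-555-0100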

What kind of a port is this?

The next issue to address is whether this is a switchport or a routed port. Since we're talking about server-facing ports, I set this manually to be a switchport, which is obligatory in certain situations anyway.

switchport

Now we need to set the mode of the switchport. Will it be an access port, or a trunk (802.1q) port?

switchport mode access

Or, for an 802.1q trunk:

switchport trunk encapsulation dot1q
switchport mode trunk

It’s increasingly common to provision an 802.1q trunk for a server these days, especially in virtualized environments. Either way (single VLAN or 802.1q), I set the mode of the link myself. I do not rely on DTP to figure this out. Note that on switches only capable of 802.1q, the “encapsulation” command won’t be available or needed. ISL encapsulation is very rarely seen these days. For what it’s worth, you aren’t really setting an “encapsulation” when assigning a VLAN trunking method of 802.1q. 802.1q is implemented as a tag, not a wrapper, and is frequently described as tagging in the networking world.
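For example, a trunk facing a virtualization host might be built like this (the interface and VLAN list are hypothetical):

interface Gi3/5
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 210,220,230
 switchport mode trunk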

Speed & duplex.

This seems like a good place to mention Ethernet speed and duplex. I do not like to hardset these values. For gigabit interfaces, manually setting speed and duplex is simply incorrect except in the rarest of circumstances. The reasons why could be a whole blog post to itself, and others have talked about it. Simply stated, if you are manually setting 1000/FULL out of habit for all gigabit ports, you’re doing it wrong.

For fast ethernet interfaces, I prefer to rely on autonegotiation, and I only set speed/duplex to overcome a duplex mismatch caused by failed autonegotiation. Fast ethernet duplex mismatches are far less common today than they were 10 years ago.
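When I do have to override a broken fast ethernet negotiation, I hardset both values, on both ends of the link (100/FULL shown as an example):

speed 100
duplex full

Hardcoding only one side is worse than hardcoding neither: the hardcoded side stops participating in autonegotiation, and the far end will fall back to half duplex, creating the very mismatch you were trying to avoid.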

MTU & jumbo frames.

The maximum transmission unit size of an ethernet frame is of concern for environments that implement jumbos. How Cisco implements frame MTU varies widely by platform. Some switches do not support jumbo frames at all. Others support it globally (i.e. all ports become jumbo-capable, or none are capable). Still others support jumbos on a port-by-port basis, and require that you configure them on individual interfaces. The actual maximum size of a jumbo also varies, although if you standardize on 9,000 bytes, you’re usually safe. Layer 3 transit points tend to be challenging for jumbos, as support for routing jumbo frames varies as well.
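To illustrate that variance, here are two hedged examples; verify the exact syntax and maximum sizes against your platform's documentation. On fixed-configuration Catalyst switches that implement jumbos globally, it's a global command that takes effect after a reload:

system mtu jumbo 9000

On platforms that support per-port jumbos, it's an interface-level command:

interface Gi3/5
 mtu 9216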

I won’t get into a thorough treatise on jumbo frames here, but will comment that the performance improvement tends to be nominal. A very rough throughput improvement number is around 10% based on my experiences. 10% isn’t nothing, but it isn’t a lot, either. So, are jumbos worth the potential headache? Jumbos are tricky to implement, because you need to provide end-to-end support for them on all interfaces and switchports in a particular VLAN to avoid connectivity challenges between hosts. The most common implementation of jumbo frames that I have seen is on ethernet switches physically isolated from the rest of the network used for iSCSI storage. In this scenario, jumbos only have to exist in a limited, well-known portion of the network (on the switches carrying the iSCSI traffic), and thus there are no challenges related to jumbos attempting to traverse network transit points.


DTP.

The next thing I like to do is stop the switchport from sending DTP frames up the line to the server. I can't think of any reason for these frames to be generated on a port that faces a server.

switchport nonegotiate

You could make an argument for also disabling CDP on the switchport, but I've found that quite often, software running on the server can read the CDP frames and feed that information to a sysadmin. As this can be useful in troubleshooting, I've opted to leave CDP advertisements on for server-facing switchports, on the assumption that I trust the server uplinking to my switch. Note that leaving CDP on for user-facing access ports might not be a good idea if you are in a highly secure environment. Then again, Cisco phones rely on CDP in many cases, so CDP can be hard to live without. In practice, the only place I habitually disable CDP is where I know the other end of the link belongs to an untrusted party. Further note that you'll be well served to learn LLDP as a standards-based alternative to CDP; LLDP is increasingly available on Cisco switches.
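For those untrusted edges, CDP is disabled per interface, while LLDP (if your IOS version supports it) is enabled globally; the interface number here is hypothetical:

interface Gi3/5
 no cdp enable

! Global configuration mode:
lldp run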


Portfast.

For server-facing ports, I enable the spanning-tree "portfast" feature, which effectively tells the switch that when the link comes up, it can move straight to the forwarding state without first watching for BPDUs to deduce the spanning-tree topology and what might be on the other end of the link.

spanning-tree portfast


Or, if the port is a trunk:

spanning-tree portfast trunk

Now, there's a risk to be conscious of when enabling "portfast trunk". The assumption is that you're uplinking to a device that requires multiple VLANs but is not a spanning-tree device, i.e. it will NOT be generating BPDUs or be able to become a transit point (a potential loop). You need to be sure that if the device is some sort of virtual switch, it does NOT have the ability to form a loop, which, in the case of redundant links, is a real concern. Talk to your server or virtualization vendor about this. It's not uncommon for server vendors who include switching technology in their products to offer a product guide for network administrators. For example, HP offers an "HP Virtual Connect for the Cisco Network Administrator" PDF, Virtual Connect being a network interface module that HP offers in their C-class bladecenters.

Another assumption I’m making is that you’ve set in global mode (not interface mode) “spanning-tree portfast bpduguard default”. This means that if a “portfast” port sees a BPDU come in, the switch will disable the port. This protects your environment from potential topology loops, as it’s safe to assume that if you see a BPDU flow into a port that should be facing a server, something has gone badly wrong.
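For reference, that global command, along with an optional errdisable auto-recovery so a tripped port doesn't stay down until someone notices (the 300-second interval is just a sample value):

spanning-tree portfast bpduguard default
errdisable recovery cause bpduguard
errdisable recovery interval 300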

Root Guard.

In spanning-tree environments, an overriding design concern is loop prevention. Cisco offers a number of protections to help with this. One straightforward protection to configure is Root Guard. Root Guard tells the switch that if it sees a spanning tree root advertisement come in on the port, move the port to a root-inconsistent state, blocking the port.

spanning-tree guard root

For access-layer ports (i.e. the network edge), you should never see root bridge advertisements flowing inbound. If you do, either there’s a loop, or someone has uplinked a new device claiming to be the root bridge. Now, you can argue that with portfast bpduguard enabled, root guard becomes redundant. If a BPDU flows into a portfast port with bpduguard enabled, the port is getting disabled anyway. That’s a fair observation. My comment is only that there’s been a time or two in my experience when bpduguard was not globally enabled when it should have been. I think of root guard as a second layer of defense.

Storm control.

Another tool I like to use is storm control. The availability of storm control will vary by switch platform and by model of line card in a chassis, but if you have it available to you, use it. What storm control does is throttle either broadcasts, multicasts or even unicasts to a rate that you set, either in bits per second or percentage of interface bandwidth. The big concern for me is broadcasts, in that an excessive number of broadcasts generated towards a port can have a negative effect on every other port in the same VLAN. Broadcasts have to be processed by every switch and host that sees them, which can impact CPU in worst case scenarios.

Storm control throttles these broadcasts so that your network is more likely to survive an unusual event. Using storm control for multicast traffic is somewhat dubious, in that there are scenarios where high flow rates of multicast traffic are legitimate; imaging PCs with Ghost via multicast comes to mind. Storm control can react to an exceeded threshold by either shutting down the port or sending an SNMP trap. My preference has been to throttle and trap, as opposed to shutting the port down; the shutdown action is too temperamental for most networks, which will see bursts of legitimate broadcasts from time to time. Check Point clusters come to mind as offenders that falsely trip the storm-control broadcast algorithm with some frequency.

storm-control broadcast level 20
storm-control multicast level 20
storm-control action trap

In this example, I’ve set broadcasts & multicasts to not exceed 20% of interface bandwidth, and the action to be an SNMP trap when they do (in addition to the normal throttling). There’s no one right answer to what storm control thresholds are appropriate. Too low, and you get a lot of false alarms. Too high, and it doesn’t help you much if you run into a topology loop or network noise generator. I tend to use 20% for access ports, and 30% for interswitch trunks. Note that a logging event is also generated when storm-control trips, so you could choose to escalate storm-control alarms via syslog instead of traps if that works better for your environment.

A final thought on storm control: if you're dealing with a genuine network event that's causing unusual volumes of broadcast or multicast traffic, you'll potentially have a correspondingly high volume of storm control alerts. Therefore, consider implementing a throttling mechanism in your alert manager for storm control events, or your inbox might be brought to its knees during a serious network event.


Filtering & port security.

There are a couple of options to secure a switchport using filtering. Many switches allow you to put a port ACL on the switchport to filter inbound traffic. If it appeals to you to create a unique ACL for every port, and that's something you think you can maintain, I say go for it, you crazy engineer. I just hope you don't have suicidal tendencies. I've never deployed a port ACL except to resolve a very specific problem. Or to mess with someone. And that's only ever been one ACL at a time.

Performing MAC filtering by using “switchport port-security” is an easier case to make, which Hank Preston does admirably well here. If you read that article, you know there are attacks that port-security helps to mitigate. As a purely practical argument, I’ve heard of port-security with sticky MACs being deployed as a way to prevent sysadmins from reusing a port without talking to the networking team first.
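As a hedged sketch of that idea, a one-MAC sticky configuration might look like this. Note that port-security requires the port to be statically set to access or trunk mode first, and the violation mode is a matter of taste:

switchport port-security
switchport port-security maximum 1
switchport port-security mac-address sticky
switchport port-security violation restrict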

I don't have a particular standard here that I deploy, as the appropriateness of these commands depends heavily on the environment in question. Mature networks that are adequately staffed and have well-defined security concerns could be well served by port ACLs and/or port-security. Certainly the demands for edge security continue to grow, as evidenced by virtual firewalls that shim into virtual switches to create a security boundary at the access layer.


Congestion management & QoS.

I do not perform congestion management on a server-facing port. My philosophy is that if a server port is full, it's full…and it's probably full of the same sort of traffic. There won't necessarily be multiple classes of traffic to identify and prioritize. My preferences for resolving access-layer congestion are to scale up with a bigger pipe (10G), scale sideways with LACP (802.3ad), or physically segment traffic types with multiple server NICs. Now, in a 10G world with converged network adapters, there are DCB standards that can sort out traffic prioritization. But in what is still today the more typical world of multiple 1Gbps links feeding a server, you often find a storage NIC dedicated to storage traffic, a backup NIC dedicated to backup traffic, a user-facing NIC servicing inbound requests, etc.

That said, the access port *is* the right place to MARK traffic, i.e. populate the ToS byte with an IP precedence or (more likely) a DSCP value. The idea is to mark an IP packet as close to the source as you can, and let that marked packet traverse the network. If the packet runs into a congestion point, allow the QoS policy deployed at that congestion point sort out via the mark what that packet’s priority is. Of course, that assumes that you’re doing prioritization of traffic based on the ToS byte. Cisco QoS syntax tends to vary by platform, and as such is far beyond the scope of this already overly long article.
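As a sketch of edge marking with MQC (the class names, matching ACL, port number, and DSCP value are all illustrative, and many Catalyst platforms also require "mls qos" enabled globally before any of this takes effect):

ip access-list extended BACKUP-TRAFFIC
 permit tcp any any eq 10000
class-map match-all CM-BACKUP
 match access-group name BACKUP-TRAFFIC
policy-map PM-EDGE-MARK
 class CM-BACKUP
  set dscp af11
interface Gi3/5
 service-policy input PM-EDGE-MARK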

Verify your work & write to NVRAM.

Once you’re done applying your configuration to the port, you need to verify your work.

show run interface Gi3/5

  • Were the commands you entered accepted? There are plenty of circumstances where a missing or overriding command will prevent another command from being applied.
  • If you perform a config in a text editor and then paste it in, you can get syntax wrong and not notice until the switch rejects your command…which you might not even catch if you are pasting a large block of commands.

show interface Gi3/5 status

If the interface has come up:
  • Does speed and duplex look correct?
  • Did the port become “err-disabled” perhaps?
  • Is the port in 802.1q trunking mode if appropriate?
  • Was the port assigned the correct VLAN if an access port, and not a trunk?

Finally, don’t forget to save your new configuration to non-volatile RAM. Obvious, but often forgotten when in a rush or otherwise distracted. I find that we engineers forget this step far too often.

copy run start
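Pulling the pieces of this post together, a server-facing access port built in my style ends up looking something like this (hostname, IP, interface, and VLAN are all placeholders):

interface Gi3/5
 description WEBSRV01 | 10.20.30.40 | Intranet web server
 switchport
 switchport mode access
 switchport access vlan 210
 switchport nonegotiate
 spanning-tree portfast
 spanning-tree guard root
 storm-control broadcast level 20
 storm-control multicast level 20
 storm-control action trap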

I’ll stop here, as we’ve gone into double-overtime. I was really aiming at the newer engineer trying to sort out the “why” of certain things in networking, and as such, I hope this peek inside my point of view on switchports was beneficial.


    • says

      Junior – I’m planning to do more of these sorts of posts as time permits. We get LOTS of requests from folks earlier in their networking career who want insight into why certain things are done. I gotta tell you though – they are a lot of work to write. :-)

    • Ben Dale says

      This sounds like a bad idea. Wouldn’t bpdufilter bypass bpduguard if the port was looped back to the same switch?

        • says

          If I remember correctly, BPDU guard and BPDU filter are mutually exclusive. And I’m also pretty sure that BPDU filter blocks *incoming* BPDUs, not outgoing. The point of BPDU filter is to prevent the port from ever being blocked, presumably because you know that no one can plug an unknown switch into it.

          • GreetingsFromPoland says

            not true, bpdufilter also prevents the switch from sending BPDUs out of a specific port (where it is enabled). bpdufilter may be dangerous… e.g. a physical loop with both ends of a copper cable plugged into the same switch = possible broadcast storm with bpdufilter enabled, versus an err-disabled port with bpduguard enabled.

          • says

            It should be stressed BPDUFilter behaves very differently when enabled globally vs. per-port.

            In Global mode, BPDUs are still sent. If a BPDU is received, the switch automatically takes the port out of PortFast mode. This is the expected behavior and what most of us think of when BPDUFilter is mentioned.

            Configured at the port level, it does not send BPDUs. This in-effect disables spanning-tree, and creates a huge liability because connecting two bpdufilter ports will cause a loop. I would really like to know what the person at Cisco designing this behavior was thinking, because there is no valid reason I can think of for doing it this way.

  1. says

    @James Cape @Ben Dale That is one very common misconception which I also had before my CCIE studies. If you enable both BPDU-guard and BPDU-filter on a port then the filter takes preference and BPDU-guard will never see the BPDU’s. So it’s useless to use them as a combination.

    @Ethan – Good post. Just one thing that could be worth considering. The default interface command is a bit risky, at least if working on modular switches. I used it on a 7600 once but that led to QoS settings for all other ports on the same blade getting reset to default. So it should be used with care.

    • Ben Dale says

      Daniel – does it filter BPDUs inbound as well?
      My understanding was that BPDU-filter stops ports from SENDING BPDUs when portfast is enabled. As these are access ports, portfast will also be enabled as per Ethan’s template (and best practice), so none of these ports will SEND BPDUs. So if two ports are looped, KABOOM.

    • Johan says

      Daniel – I’ve also seen that behavior regarding QoS settings on modular switches. Has anyone seen any documentation about it?

  2. says

    I’ve had quite an experience deploying some new networks where two customers were managing the same network. The other customer just didn’t bother to check the desk port, so we had to run around with a Fluke and use CDP to check which switchport the desk port attached to. Project’s closed, lessons learned, CDP rules.

  3. VV says

    Another command that is useful is

     # switchport host 

    This configures the port for a host device, enables Spanning Tree PortFast, and disables EtherChannel on a per-port basis.

    sw1(config-if)#switchport host
    switchport mode will be set to access
    spanning-tree portfast will be enabled
    channel group will be disabled

    By the way excellent write up Ethan as always.

  4. Bsciarra says

    Great article.  I manage a switching environment where the port descriptions rarely match what’s behind them.  I will use sho mac address-table interface # at the switch and sho ip arp | mac-address at the default gateway to once and for all correct the port descriptions and create some awesome documentation/diagrams.

      • Dave Noonan says

        So often that I used AutoHotKey to create shortcuts so I can type ;shmac to get “show mac-address-table | inc [Vv]lan|” and ;sharp to get “show arp | inc Proto|”.  AutoHotKey rocks for frequently used commands.

  5. says

    Yep. I’m using SolarWinds NCM for that in my world, and have needed to rely on it to recover from some situations. It’s pretty wonderful to have a database of historical configurations.

  6. Paulie says

    CDP going to a VMware host is very useful; in vClient or vCenter, the vSwitch has a little bubble which presents the CDP information. Then there is the vmtracer in the Arista switches. :)

  7. Rizwan says

    Awesome blog; he has touched every aspect of switchport config, as the title suggests. The storm control feature is a part I’ve never seen in my last 4 years of experience; I will try to implement it now.

  8. Hitaesh says

    Nice, this has cleared up many things and covered the tasks to be done when configuring a new switch.

  9. says

    Really great guide. I think one very important part missing is QoS – that’s a must-have for any switch carrying VoIP traffic, especially if there’s a congested WAN involved.

    Another thing I always do in corporate environments is disable logging for desktop ports, since I don’t really care tracking down when a user plugs in.

    no logging event link-status – Don’t generate LINK-UPDOWN messages
    no logging event power-inline-status – Don’t generate ILPOWER messages
